Outside The Matrix: Tiina Likki

Dr Tiina Likki is a Senior Advisor at the Behavioural Insights Team in London where she focuses on labour market and welfare policy. Prior to joining the BIT, she completed a PhD in social psychology at the University of Lausanne where her research focused on public attitudes towards the welfare state in Europe. She also helped set up Tänk, a think tank that aims to introduce an evidence-based, behavioural science approach to public policy in Finland.

Tell us about your work: how does decision making psychology fit in it? 

I’m a social psychologist by training and work for the Behavioural Insights Team (BIT). BIT is a former UK government unit, now a social purpose company that applies behavioural science to policy-making. My current focus is on employment and health, so I spend a lot of time looking at how the government could better support people in getting back to work or staying in their current jobs. As a social purpose company, we cover a range of policy areas including education, financial and consumer behaviour, crime, international development, and energy and sustainability. Helping people and societies achieve good outcomes in these areas often boils down to supporting people in ways that allow them to make the right decisions.

Why did you decide to go into industry instead of continuing in academia?

Towards the end of my PhD, I became increasingly passionate about the popularisation of science and evidence-based policy. I felt that findings from social psychology and behavioural science were incredibly important and that they should be more widely available for everyone to use. Some academics such as Richard Thaler and Carol Dweck do a great job of sharing their findings through accessible books, but many never make it from journal pages to policy-makers’ reading lists. In my current capacity, I am able to share and apply this vast scientific knowledge to deal with issues that affect large parts of the population. At BIT I have the benefit of being able to run randomized controlled trials and maintain close ties with academics, so I feel like I’m getting the best of both worlds.

What do you enjoy the most in your current role?

I enjoy how the role requires me to look at things from many different perspectives – those of the user, the client, the academic and the civil servant. This requires developing different skillsets in parallel, which can be challenging, but also very rewarding. I get incredibly excited when I get to apply the latest evidence to real issues. For example, I have been reading a lot on mental contrasting and implementation intentions which describe how to set effective goals, maintain the motivation to pursue them, and ensure you take the necessary steps to achieve your goal. I have been using this literature to develop coaching methods for people who are unemployed. I recently came across an article on the same methods in the Harvard Business Review. It is fascinating that the same theories can be applied to both jobseekers who have been out of work for a long time, and to high-level professionals looking to advance in their careers.

Do you see any challenges to the wider adoption of decision making psychology in your field?

I feel that there is an increasing openness among policy-makers globally to make behaviourally informed policy. The huge interest created by the recent BX2015 conference was a really positive sign. In the UK, having the support of past and present senior civil servants, such as Jeremy Heywood and Gus O’Donnell, has really helped a wider audience to see the value in behavioural insights. In my experience there is a real interest among civil servants to learn more, and the number of senior decision makers who have read books like Thinking, Fast and Slow and Nudge has grown steadily.

How do you see the relationship between academic researchers and practitioners?

This is a relationship where everyone stands to win from engaging genuinely with each other. Practitioners can gain some truly useful tools and ideas, as well as support in evaluation, while academics can gain an understanding of where their research will have the biggest demand and impact. There is certainly room for more academic institutions to run workshops inviting representatives from policy and industry to share their challenges. Similarly, students could learn a great deal from hands on projects that allow them to apply the behavioural theories they have learned.

What advice would you give to young researchers who might be interested in a career in your field?

If you are about to start academic work in this area (as a student or researcher), see if there are ways to partner with another organisation for field work or results sharing. This will give you a taste of running more applied projects and will help determine whether you enjoy it or whether you prefer to stay in a more traditional academic setting. If you enjoy the experience and decide to move into industry or policy, your applied research experience will give you a strong head start.

BIT profile | LinkedIn

Outside the Matrix: Lizzy Leigh

Continuing our interview series with people who have moved into industry after completing their PhD, this week we speak to Lizzy Leigh, a behavioural research analyst at Swiss Re in London. She has a PhD in Health Psychology from University College London, focusing on the psychological aspects of recovery from coronary artery bypass graft surgery.

Tell us about your work: how does decision making psychology fit in it? I work for one of the largest reinsurance companies globally. Reinsurance is insurance for insurance companies, and as well as taking on the majority of the risk, we act as consultants, employing experts from a range of fields who can advise on medical conditions (for critical illness insurance) and predictors of longevity and mortality (for life insurance), as well as occupational therapists (to help get claimants back to work), and so on. My background is in psychology, and I work in a research and development team of about 10 academics (the majority outside the UK) who take on pieces of research that will be beneficial for our clients, the insurance companies. Though my PhD is in health psychology, I now work almost exclusively in behavioural economics. We take insights from behavioural science and apply them to all areas of insurance where decisions are made, through live field trials with our clients. For example, we suggest changes to underwriting (application) form questions regarding health behaviour and medical conditions, to try to encourage the applicant to be honest and accurate in their answers. We also suggest changes to letters, websites, telephone scripts and apps, helping our clients to achieve what they’re looking for, be it better retention, faster turnaround of claims or click-through rates on a website. We have the mantra of ‘test, test, test’ and we are now beginning to get some early positive results back. So we directly apply concepts from the literature into practice, improving what we know as the findings come back.

Why did you decide to go into industry instead of continuing in academia? I was in academia for 5 years after my undergraduate (psychology) and masters (health psychology), working as a research assistant in two different departments for 2 years and then completing my PhD for the final 3. My decision to leave academia wasn’t a certain one. My current job was brought to my attention by a fellow PhD student, but the application deadline was the day after I submitted my thesis and the first interview was the day after my mock viva (in which my thesis was ripped to shreds). So I didn’t have much time to prepare, and didn’t think for one second that I would get the job. The company I work for is relatively unique in the way it combines academia with business, employing lots of people with masters in all sorts of subjects, and the occasional person with a PhD too, and I had no idea the opportunity existed. I now see how low my confidence was at the end of my PhD and how much higher it is now that I don’t work in academia any more. I really enjoy working in a job where I am valued and given loads of new opportunities, which is exactly what I needed following the gruelling final year of my PhD.

What do you enjoy the most in your current role? When I interviewed for my role, I was asked how I felt I would cope with moving from focusing on one project for 3 years to dealing with several all at once in the new job. I said I was looking forward to the challenge, and now that I’ve settled in I think this is one of the things I enjoy the most. That, and the fact that application of the theory happens pretty quickly. In fact, there’s barely enough time to read the literature before I’m expected to make suggestions about a client’s website, letter, etc. The results still take a while to come, but knowing the client is happy and that you got great results makes them worth the wait.

Do you see any challenges to the wider adoption of decision making psychology in your field? We have a few challenges in terms of: a) client buy-in, and b) client willingness to try out tests. And even if we get great results and convince everybody that the new, behaviourally informed ways of communicating are better than the originals, challenges remain. Insurance is in many ways an old-fashioned industry that is slow to keep up. Underwriters have decades of experience in working out how to ask people questions in a way that will elicit accurate information, and don’t necessarily agree with suggestions of new ways to do it. The majority of life insurance policies in many markets are sold via agents or independent financial advisors, and if those advisors don’t read the question as we’ve redesigned it, we have little control over that. Perhaps as there is a gradual move towards policies being sold online we will have better control over our application of behavioural science, but whether we like it or not, if someone wants to deliberately not disclose the truth, there’s little that insurance, as the industry currently stands, can do about it.

How do you see the relationship between academic researchers and practitioners? The best insight I can give here is on the relationship between academic researchers and people who work in industry who are interested in research. The company I work for is very interested in topics surrounding epidemiology, medicine, economics and so on and so we have collaborations with researchers in those fields and others. From my time here I’ve been gaining some insight on how academic researchers perceive private sector companies who could possibly fund academic research and vice versa. I have been surprised and pleased to observe a very mutual respect from both sides, and not the stereotypical perceptions you might expect.

What advice would you give to young researchers who might be interested in a career in your field? My perception is that it’s such an interesting topic, it’s quite possible to pick it up quickly, so I would recommend not spending too long trying to understand the theory but try and get stuck straight in with doing testing (I’m sure plenty will disagree). I also strongly recommend thinking outside the box of where you could work. I have since met others using behavioural science in the insurance setting, and I believe there is a place for it in every industry, so think how you could take it to that industry.

LinkedIn | Twitter | Academia.edu

Outside the Matrix: Dan Lockton

This week we’re returning to our Outside the Matrix series with Dan Lockton, who is a senior associate at the Helen Hamlyn Centre for Design, a specialist research centre at the Royal College of Art in London, and does freelance work as Requisite Variety. He received his PhD in Design for Behaviour Change from Brunel University, based around the Design with Intent toolkit, and worked on behavioural research projects, particularly on energy use, at the University of Warwick and at Brunel, before his current role in a collaborative project between the RCA, Imperial College London, the Institute for Sustainability and a number of European partners. Before returning to academia, Dan worked on a range of commercial product design and R&D projects; he also has a Cambridge-MIT Institute Master’s in Technology Policy from the University of Cambridge (Judge Business School), and a BSc in Industrial Design Engineering from Brunel.
Tell us about your work: how does decision making psychology fit in it? All design necessarily embodies models of people’s behaviour—assumptions about how people will make decisions, and behave, when using, interacting with or otherwise experiencing products, services, or environments. It’s a fairly basic component of design, although it’s perhaps only rarely considered explicitly as being about decision making psychology. Whether or not designers think about their work in these terms, it is going to have an impact on how people behave, so it’s important to try to understand users’ decision processes, and how design affects them (or should be affected by them). So both in research projects themselves, and in teaching design students how to do ‘people-centred’ design research, psychology plays a big role in my work.

Understanding how different people make decisions, through research in real contexts, becomes even more crucial when trying to do ‘design for behaviour change’, of course. You end up (hopefully) confronting and questioning many of the models and assumptions that you previously had, and develop much more nuanced models of behaviour which usefully preserve the variety of real-life differences.

In my current main project, SusLab (which is a small part of a major European project), I’m working with Flora Bowden on reducing domestic energy use through a combination of technology and behaviour change, but we’re taking a much more people-centred approach than much of the work in this field has done previously—doing ethnographic research with householders to uncover much more detailed insights about what people are actually doing when they are ‘using energy’—the psychology of the decision processes involved, the mental models people have of the systems around them, and the social contexts of practices such as heating, entertainment and cleaning. We then co-design and prototype new products and services (somewhat grudgingly termed interventions) with householders, so that they are not test subjects, but participants in developing their own ways of changing their own behaviour. This is the Helen Hamlyn Centre for Design’s forte: including people better in design processes, from ageing populations and users with special needs to particular communities underserved by the assumptions embedded in the systems around them.

Reducing energy use is a major societal challenge—there is a vast array of projects and initiatives, from government, industry and academia as well as more locally driven schemes, all aiming to tackle different aspects of the problem. However, many approaches, including the UK’s smart metering rollout, largely treat ‘energy demand’ as something almost homogeneous, to be addressed primarily through pricing-based feedback, rather than being based on an understanding of why people use energy in the first place—what are they actually doing? We think that people don’t set out to ‘use energy’: instead, they’re solving everyday problems, meeting needs for comfort, light, food, cleaning and entertainment, with a heavy dose of psychology in there, and sometimes with an emotional dimension too.

Equally, people’s understandings—mental models—of what energy is, and how their actions relate to its use, and their use of heuristics for deciding what actions to take, are under-explored, and could be extremely important in developing ways of visualising or engaging with energy use which are meaningful for householders. This is where ethnographic research, and in-context research on decision-making in real life, can provide insights which are directly useful for the design process.  

The overall project covers a broad scope of work and expertise, including environmental scientists and architects alongside design researchers, and benefits from ‘Living Lab’ instrumented houses in each country, which will provide a platform (albeit artificial) for demonstrating and trialling the interventions developed, before they are installed in houses in real life.

How did you first become interested in decision making psychology? I first got interested in the area while doing my Master’s back in 2004-5. For my project, I was looking at how technologies, and the structure of systems, have been used to influence (and control) public behaviour, and as such, approaches such as B.J. Fogg’s Persuasive Technology were very relevant. While Persuasive Technology has tended not to employ ‘behavioural economics’ techniques too much, it was initially through this angle of ‘persuasion’ that I read people like Robert Cialdini, then followed the thread through to learn more about cognitive biases and heuristics, from authors such as Scott Plous, the Russell Sage Foundation-supported collections of Tversky, Kahneman, Gilovich, Slovic et al’s papers, then Gigerenzer and the ABC group’s work. Herbert Simon’s work has also been a huge influence, because his multidisciplinarity enabled so many parallels to be drawn between different fields. It was partly through his work, I think, that I became interested in cybernetics and this whole body of work from the 1940s onwards which attempted to draw together systems across human psychology, technology and nature, but which in public consciousness seems mainly to be about people with robotic hands.

In parallel, I was familiar with concepts such as heuristics, affordances and mental models from the cognitive ergonomics literature, one of the other main intersections between design and psychology. Here, the work of people such as Don Norman and Jakob Nielsen is hugely influential; this had first become interesting when I was in industry, working on some products which really would have benefitted from a better understanding of the intended customers’ perceptions, thought processes, needs and abilities, and I was hungry to learn more about how to do this. The idea of applying psychological insights to the design process greatly appealed to me: I had something of an engineer’s mindset that wanted, Laplace’s demon-like, to be able to integrate all phenomena, social and physical, into something ‘actionable’ from a design standpoint. While I now appreciate my naïvety, the vision of this ‘system’ was a good inspiration for taking things further.

For my PhD—supervised by David Harrison (Brunel) from the ‘design’ side and Neville Stanton (Southampton) from the ‘psychology’ side—I tried to bring together insights relevant to behaviour change from lots of different disciplines, including behavioural economics, into a form which designers could use during design processes, for products, services and environments, with a focus on influencing more sustainable and socially beneficial behaviour. Various iterations were developed, via lots of workshops with designers and other stakeholders, ending up with the Design with Intent toolkit. This is still a work in progress, though it’s had to take back seat to some more practical projects in the last couple of years, but I hope in 2014 to be able to release a new version together with, perhaps, a book.

Why did you decide to stay in academia instead of going into industry?
I like to think I’ve found the best of both worlds: the Helen Hamlyn Centre for Design acts as a consultancy for many of its projects with commercial clients, but also (as part of the Royal College of Art) works as part of many academic research projects (though always with a practical focus). During my first six months here, I’ve worked on commercial projects for new startups and a mobility products manufacturer, as well as two academic research projects. Alongside this job I also do some freelance consultancy in industry, which often involves running workshops on design and behaviour, writing articles, and generating early-stage ideas for companies interested in including a ‘behavioural’ element in their design processes.

There are advantages and disadvantages of academic and industrial work contexts. The freedom to pursue ‘pure’ knowledge (whatever that really means), and indeed more open-ended research, with longer timeframes, is a wonderful aspect of academia, a luxury that most companies cannot really afford given the constraints of the market. However, I found the bureaucracy at both Brunel and the University of Warwick crushingly slow: there was a lot of research that just never got done because the system made sure it took too long, or involved too much paperwork to bother with. That was deeply frustrating, when there are many very good researchers at both institutions who would thrive given a bit more freedom to do things. The RCA (perhaps because it’s so small) is refreshingly fast: it’s possible to decide to try something in the morning and go and do it in the afternoon, or even immediately.

Perhaps also, despite being relatively knowledgeable about behaviour change—one of the biggest buzzwords of the last five years!—I was very reluctant to go straight into a commercial application of the work which has no social benefit. I don’t want to use insights to sell people more things they don’t need, or exploit biases and heuristics to segment and profile consumers to target them with more advertising. I apply John Rawls’s ‘veil of ignorance’ wherever I can: I hate it when advertisers and marketers make assumptions about me, and my likely behaviour, so I don’t particularly want to do that to other people. That rules out a lot of organisations who want people with ‘behaviour change’ credentials.

What do you enjoy the most in your current role? While doing lots of projects is a lot of work, and there’s a tendency for this sort of thing to take over your life, in all honesty this is a very enjoyable job. Meeting lots of different people—members of the public—and actually involving them in the research: designing with them rather than for them, is incredibly satisfying. Also, I think most of the people working for the Helen Hamlyn Centre, because their jobs involve so much research with the public, are actually genuinely nice people. So they’re great to work with.

Do you see any challenges to the wider adoption of decision making psychology in your field? Most designers are not trained in psychology, so there is always a barrier to adoption. There is also the risk that highly popularised approaches and trends, such as what Nudge has become, lose their nuance and the cautious scientific approach when they just become another soundbite or quick-fix ‘solution’, applied to any context without doing any actual user research. And I’m aware that Design with Intent was essentially this, a context-free toolbox of ideas to apply to any situation, and I now see it as a major flaw which needs to be addressed in future versions.

But if I see the DDB/VW Piano Stairs video used one more time as a kind of universal panacea for deeply complex social problems (“Design can fix anything, just look at how they made taking the stairs fun!!!!”) then I’ll scream, or more likely mumble something grumpily at the back of the room.

How do you see the relationship between academic researchers and practitioners? Design isn’t really an academic subject in itself—it’s a process. I might have a PhD in it, but I’ll be honest and say that it’s lacking in a lot of formal theory. That isn’t a bad thing, necessarily—again, Herbert Simon (in The Sciences of the Artificial) and then Donald Schön (in The Reflective Practitioner) did a good job of explaining, in different ways, why it is a qualitatively different approach to knowledge than the natural sciences—but what it does mean is that the most interesting and useful research for designers is often not in design at all, but in other fields that overlap. Designers need to be learning from psychologists, anthropologists, social researchers, economists, biologists, and actual practitioners in other fields. It also means there are a lot of design research papers which are basically restatements of the “What is design? What does it mean to be a designer?” question, which are fine but become tiring after a while.

So, to return to the question, academic ‘design’ research is generally very poor at being useful to practitioners. Part of this is the eternal language / framing barrier between academia and practice—there are so many assumptions about terminology and so on which prevent easy engagement—but there is also the access problem. Design consultancies very rarely subscribe to academic journals, and even if they do subscribe to design journals, it’s probably journals from outside the field (see above) that would bring more useful insights anyway. When I did a brief survey on this, these were a few of the points which came up.

What advice would you give to young researchers who might be interested in a career in your field? I would very much like to see more designers drawing on the heuristics work of Gerd Gigerenzer, Peter Todd, et al, and exploring what this means in the context of design for behaviour change and design in general. Bounded rationality, seen as a reality and as essentially adaptive rather than a ‘defect’ in human decision-making, seems to marry up quite well with the tenets of ethnography and people-centred design. Some people have started to do it, e.g. Yvonne Rogers at UCL, but there is a massive opportunity for some very interesting work here.

Also, consider cybernetics. Read Hugh Dubberly and Paul Pangaro’s work and think about systems more broadly than the disciplinary boundaries within which you may have been educated. In general, read as much as you can, outside of what you think ‘your subject’ is. The most interesting innovations always occur at the boundaries between fields.

More than anything else, work on projects where you do research with real people, in real, everyday life contexts, rather than only in lab studies. It will change how you model behaviour, how you think about people, and how you understand decision making.

Visit Dan’s website: http://architectures.danlockton.co.uk/dan-lockton/

Viewpoint: Why I’m Leaving Academia

This week we’re featuring a guest post from Ben Kozary, a PhD candidate at the University of Newcastle in Australia. After getting to know Ben at various conferences over the past year, the InDecision team was disappointed to hear about his decision to leave academia – partly because he’s an excellent and passionate researcher, partly because we wouldn’t benefit from his jovial company at future conferences! However, his reasons for leaving echoed many dinner conversations we’ve had with fellow PhD students so we asked him to write about his experience and his decision to move to industry. Over to Ben…

To say I’ve learnt a lot during my PhD candidature would be an understatement. From a single blank page, I now know more than most people in the world about my particular topic area. I understand the research process: from planning and designing a study; to conducting it; and then writing it up clearly – so that readers may be certain about what I did, how I did it, what I found, and why it’s important. I’ve met a variety of people from around the world, with similar interests and passions to me, and forged close friendships with many of them. And I’ve learnt that academia might well be the best career path in the world. After all, you get to choose your own research area; you have flexible working hours; you get to play around with ideas, concepts and data, and make new and often exciting discoveries; and you get to attend conferences (meaning you get to travel extensively, and usually at your employer’s expense), where you can socialise (often at open bars) under the guise of “networking”. Why, then, you might be wondering, would I want to leave all of that behind?

My journey through the PhD program has been fairly typical; I’ve gone through all of the usual stages. I’ve been stressed in the lead-up to (and during) my proposal defence. I’ve had imposter syndrome. And I’ve been worried about being scooped, and/or finding “that paper”, which presents the exact research I’m doing, but does it better than me. But now, as I begin my final year of the four year Australian program, I’m feeling comfortable with, and confident in, the work I’ve produced so far in my dissertation. And yet, I’m also disillusioned – because, for all of its positives, I’ve come to see academia as a broken institution.

That there are problems facing academic research is not news, especially in psychology. Stapel and Smeesters, researcher degrees of freedom and bias, (the lack of) statistical power and precision, the “replication crisis” and “theoretical amnesia”, social and behavioural priming: the list goes on. However, these problems are not altogether removed from one another; in fact, they highlight what I believe is a larger, underlying issue.

Academic research is no longer about a search for the truth

Stapel and Smeesters are two high profile examples of fraud, which represents an extreme exploitation of researcher degrees of freedom. But what makes any researcher “massage” their data? The bias towards publishing only positive results is no doubt a driving force. Does that excuse cases of fraud? Absolutely not. My point, however, is that there are clear pressures on the academic community to “publish or perish”. Consequently, academic research is largely an exercise in career development and promotion, and no longer (if, indeed, it ever was) an objective search for the truth.

For instance, the lack of statistical power evident in our field has been known for more than fifty years, with Cohen (1962) first highlighting the problem, and Rossi (1990) and Maxwell (2004) providing further prompts. Additionally, Cohen (1990; 1994) reminded us of the many issues associated with null-hypothesis significance testing – issues that were raised as far back as 1938 – and yet, it still remains the predominant form of data analysis for experimental researchers in the psychology field. To address these issues, Cohen (1994: 1002) suggested a move to estimation:

“Everyone knows” that confidence intervals contain all the information to be found in significance tests and much more. […] Yet they are rarely to be found in the literature. I suspect that the main reason they are not reported is that they are so embarrassingly large! But their sheer size should move us toward improving our measurement by seeking to reduce the unreliable and invalid part of the variance in our measures (as Student himself recommended almost a century ago). Also, their width provides us with the analogue of power analysis in significance testing – larger sample sizes reduce the size of confidence intervals as they increase the statistical power of NHST. 

Twenty years later, and we’re finally starting to see some changes. Unfortunately, the field now has to suffer the consequences of being slow to change. Even if all our studies were powered at the conventional level of 80% (Cohen, 1988; 1992), they would still be imprecise; that is, the width of their 95% confidence intervals would be approximately ±70% of the point estimate or effect size (Goodman and Berlin, 1994). In practical terms, that means that if we used Cohen’s d as an effect size metric (for the standardised difference between two means), and we found that it was “medium” (that is, d = 0.50), the 95% confidence interval would range from 0.15 to 0.85. This is exactly what Cohen (1994) was talking about when he said the confidence intervals in our field are “so embarrassingly large”: in this case, the interval tells us that we can be 95% confident the true effect size is potentially smaller than “small” (0.20), larger than “large” (0.80), or somewhere in between. Remember, however, that many of the studies in our field are underpowered, which makes the findings even more imprecise than what is illustrated here; that is, the 95% confidence intervals are even wider. And so, I wonder: How many papers have been published in our field in the last twenty years, while we’ve been slow to change? And how many of these papers have reported results at least as meaningless as this example?
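The interval in this example is easy to reproduce. The sketch below (a rough illustration, not part of the original argument) uses the standard large-sample approximation to the standard error of Cohen's d, with 64 participants per group, the conventional sample size for 80% power to detect d = 0.50 at the .05 level:

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    # Large-sample (normal) approximation to the 95% CI for Cohen's d:
    # SE^2 = (n1 + n2)/(n1 * n2) + d^2 / (2 * (n1 + n2))
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# ~64 per group gives 80% power to detect d = 0.50 at alpha = .05
lo, hi = cohens_d_ci(0.50, 64, 64)
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")  # roughly [0.15, 0.85]
```

Even at the conventionally "adequate" power level, the interval spans everything from smaller than small to larger than large, which is exactly the imprecision described above.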

I suspect that part of the reason for the slow adoption of estimation techniques is the uncertainty they expose in the data. Significance testing is characterised by dichotomous thinking: an effect is either statistically significant or it is not. In other words, significance testing is seen as easier to conduct and analyse than estimation; however, it does not allow for the same degree of clarity in our findings. By reporting confidence intervals (and highlighting uncertainty), we reduce the risk of committing one of the cardinal sins of consumer psychology: overgeneralisation. Furthermore, you may be surprised to learn that estimation is just as easy to conduct as significance testing, and even easier to report (because you can draw greater meaning from your results).

Replication versus theoretical development

When you consider the lack of precision in our field, in conjunction with the magnitude of the problems of researcher degrees of freedom and publication bias, is it any wonder that so many replication attempts are unsuccessful? The issue of failed replications is then compounded further by the lack of theoretical development that takes place in our discipline, which creates additional problems. The incentive structure of academia means that success (in the form of promotion and grants) comes to those who publish a high number of high-quality papers (as determined by the journal in which they are published). As a result, we have a discipline that lacks both internal and external relevance, due to the multitude of standalone empirical findings that fail to address the full scope of consumer behaviour (Pham, 2013). In that sense, it seems to me that replication is at odds with theoretical development when, in fact, the two should be working in tandem; that is, replication should guide theoretical development.

Over time, some of you may have observed (as I have) that single papers are now expected to “do more”. Papers will regularly report four or more experiments, in which they will identify an effect; perform a direct and/or conceptual replication; identify moderators and/or mediators and/or boundary conditions; and rule out alternative process accounts. I have heard criticism directed at this approach, usually from fellow PhD candidates, that there is an unfair expectation on the new generation of researchers to do more work to achieve what the previous generation did. In other words, that the seminal/classic papers in the field, upon which now-senior academics were awarded tenure, do less than what emerging and early career researchers are currently expected to do in their papers. I do not share this view that there is an issue of hypocrisy; rather, my criticism is that as the expectation that papers “do more” has grown, there is now less incentive for academics to engage in theoretical development. The “flashy” research is what gets noticed and, in turn, what gets its author(s) promoted and wins them grants. Why, then, would anyone waste their time trying to further develop an area of work that someone else has already covered so thoroughly – especially when, if you fail to replicate their basic effect, you will find it extremely difficult to publish in a flagship journal (where the “flashiest” research appears)?

This observation also raises the question: where has this expectation that papers “do more” come from? As other scientific fields (particularly the hard sciences) have reported more breakthroughs over time, I suspect that psychology has desired to keep up. The mind, however, in its intangibility, is too complex to allow for regular breakthroughs; there are simply too many variables that can come into effect, especially when behaviour is also brought into the equation. Nowhere is this issue highlighted more clearly than in the case of behavioural priming. Yet, with the development of a general theory of priming, researchers can target their efforts at identifying the varied and complex “unknown moderators” of the phenomenon and, in turn, design experiments that are more likely to replicate (Cesario, 2014). Consequently, the expectation for single papers to thoroughly explain an entire process is removed – and our replications can then do what they’re supposed to: enhance precision and uncover truth.

The system is broken

The psychology field seems resistant to returning to simpler papers that take the time to develop theory and contribute to knowledge in a cumulative fashion. Reviewers continue to request additional experiments, rather than demand greater clarity from reported studies (for example, in the form of effect sizes and confidence intervals) and/or encourage further theoretical development. Put simply, there is an implicit assumption that papers need to be “determining” when, in fact, they should be “contributing”. As Cumming (2014: 23) argues, it is important that a study “be considered alongside any comparable past studies and with the assumption that future studies will build on its contribution.”

In that regard, it would seem that the editorial/publication process is arguably the larger, underlying issue contributing (predominantly, though not necessarily solely) to the many problems afflicting academic research in psychology. But what is driving this issue? Could it be that the peer review process, which seems fantastic in theory, doesn’t work in practice? I believe that is certainly a possibility.

Something else I’ve come to learn throughout my PhD journey is that successful academic research requires mastery of several skills: you need to be able to plan your time; communicate your ideas clearly; think critically; explore issues from a “big picture” or macro perspective, as well as at the micro level; undertake conceptual development; design and execute studies; and be proficient at statistical analysis (assuming, of course, that you’re not an interpretive researcher). Interestingly, William Shockley, way back in 1957, posited that producing a piece of research involves clearing eight specific hurdles – and that these hurdles are essentially all equal. In other words, successful research calls for a researcher to be adept at each stage of the research process. However, in reality, it is often the case that we are very adept (sometimes exceptional) at a few aspects, and merely satisfactory at others. The aim of the peer review process is to correct or otherwise improve the areas we are less adept at, which should – theoretically – result in a strong (sometimes exceptional) piece of research. Multiple reviewers evaluate a manuscript in an attempt to overcome these individual shortfalls; yet, look at the state of the discipline! The peer review process is clearly not working.

I’m not advocating abandoning the peer review process; I believe it is one of the cornerstones of scientific progress. What I am proposing, however, is for an adjustment to the system – and I’m not the first to do so. What if we, as has been suggested, move to a system of pre-registration? What if credit for publications in such a system were two-fold, with some going towards the conceptual development (resulting in the registered study), and some going towards the analysis and write-up? Such a system naturally lends itself to specialisation, so, what if we expected less of our researchers? That is, what if we were free to focus on those aspects of research that we’re good at (whether that’s, for example, conceptual development or data analysis), leaving our shortfalls to other researchers? What if the peer review process became specialised, with experts in the literature reviewing the proposed studies, and experts in data analysis reviewing the completed studies? This system also lends itself to collaboration and, therefore, to further skill development, because the experts in a particular aspect of research are well-recognised. The PhD process would remain more or less the same under this system, as it would allow emerging researchers to identify – honestly – their research strengths and weaknesses, before specialising after they complete grad school. There are, no doubt, issues with this proposal that I have not thought of, but to me, it suggests a stronger and more effective peer review process than the current one.

A recipe for change

Unfortunately, I don’t believe these issues that I’ve outlined are going to change – at least not in a hurry, if the slow adoption of estimation techniques is anything to go by. For that reason, when I finish my PhD later this year, I will be leaving academia to pursue a career in market research, where obtaining truth from the data to deliver actionable insights to clients is of the utmost importance. Some may view this decision as synonymous with giving up, but it’s not a choice I’ve made lightly; I simply feel as though I have the opportunity to pursue a more meaningful career in research outside of academia – and I’m very much looking forward to the opportunities and challenges that lie ahead for me in industry.

For those who choose to remain in academia, it is your responsibility to promote positive change; that responsibility does not rest solely on the journals. It has been suggested that researchers boycott the flagship journals if they don’t agree with their policies – but that is really only an option for tenured professors, unless you’re willing to risk career self-sabotage (which, I’m betting, most emerging and early career researchers are not). The push for change, therefore, needs to come predominantly (though not solely) from senior academics, in two ways: 1) in research training, as advisors and supervisors of PhDs and post-docs; and 2) as reviewers for journals, and members of editorial boards. Furthermore, universities should offer greater support to their academics, to enable them to take the time to produce higher quality research that strives to discover the truth. Grant committees, also, may need to re-evaluate their criteria for awarding research grants, and focus more on quality and meaningful research, as opposed to research that is “flashy” and/or “more newsworthy”. And the next generation of academics (that is, the emerging and early career researchers) should familiarise themselves with these issues, so that they may make up their own minds about where they stand, how they feel, and how best to move forward; the future of the academic institution is, after all, in their hands.

 

Outside The Matrix: Paul Picciano

In our first 2014 Outside The Matrix interview we meet Paul Picciano, who is a Senior Human-Systems Engineer at Aptima, Inc., a leading human-centered engineering firm based near Boston, MA. At Aptima, he applies a diverse set of cognitive engineering methods to improve human performance in the military, intelligence community, air traffic management, and health care. His approach to supporting humans operating in complex environments leverages system design and training to enhance decision processes. Dr. Picciano earned a Ph.D. in Cognitive and Neural Science from the University of Utah, an M.S. in Human Factors and Ergonomics from San Jose State University, and a B.S. in Mechanical Engineering from Tufts University.

Dr. Picciano was also one of the speakers at the InDecision dinner for young researchers organised at the recent Society for Judgment and Decision Making conference in Toronto. 

Tell me about your work: how does decision making psychology fit in it? Most of the work we do involves human operators that must collect data from the environment, analyze and make sense of the input, and select and execute a course of action. The conditions under which they work typically involve uncertainty and time pressure, modulated by goals, objectives, and priorities that change over time.

My favorite part of the job is getting out there, observing and interacting with the experts (and sometimes novices) performing their craft. This has provided access to operating rooms, air traffic control towers, Navy ships, and various command centers for organizations ranging from the Air Force to the CDC. When it’s time to run a more controlled study, there is great access to high-fidelity simulators at some of the top government and academic labs.

At Aptima, psychology plays a large part in much of our work. We provide services such as training, organizational analysis, and system design, by employing practitioners from industrial/organizational, cognitive, and neural disciplines across our portfolio. Most of my work is rooted in cognitive science, looking at perception, attention, and decision making as a mechanism for behavior and resultant task performance. It’s critical to understand how people process information. Empirical findings continue to demonstrate the magnitude of the influence of environments and decision architectures on the human operator in all domains. Many operators confront stressful situations, data overload, and conflicting objectives, so having a grasp of these psychological aspects helps us design more accommodative systems and better training programs to prepare them. But of course, we don’t always get it exactly right…

Why did you decide to go into industry instead of continuing in academia? I was in industry before I went to graduate school – I worked for five years after college, and thought I would just go back for an MS and return to the workforce. Plans changed when I realized how much I enjoyed being back in school and doing applied research (at NASA Ames). I found Aptima during this time and was tempted to leave, but I decided to continue school.
One might ask why I didn’t change my target over the next few years. First, I was committed to completing the PhD program. Second, I continued to be enamored with the academic environment. It is a great opportunity to interact with bright colleagues and an energetic student population with the benefits of a flexible schedule. I was even able to coach lacrosse in grad school and that may have been an option if I had chosen to work on campus long term.
However, I really enjoy the diversity a consulting role provides, interacting with customers in a wide range of domains and problems. I believed industry would provide me more of those experiences and greater opportunity to travel to see different types of operations. I was also very fortunate to find advisors that supported my path away from academia.

How did you first become interested in decision making psychology? Psychologists run such clever experiments. That’s probably what hooked me. The experimental designs and results from people like Milgram, Festinger, Tversky & Kahneman, Loftus, and Ariely are not just fascinating, they’re also actionable. Designers of systems, policies, and organizational structures can leverage these findings to make things better.

I view so much of behavior as a result of decision making – whether implicit or explicit, automatic or deliberate, with intuition or reason as the driving mechanism. Even at the perceptual, instantaneous level, I still see these reactions as decision making. In the heart of the NFL playoffs now, the analysts always talk about quarterback decision making. These are trained, perceptually-driven, goal-directed actions that are dictated by the environment, expectations and training. Similarly, coaches are making decisions on fourth down and general managers are making draft decisions. For all of these decision types, there is a great deal in the scientific literature that could improve these decision processes (if any NFL owners are reading this, I can make myself available for a consulting gig!).

What type of research do you find most interesting, useful or exciting? In my opinion, the most valid research emerges when we have the opportunity to marshal a diversity of research techniques that includes observations in naturalistic settings, high fidelity simulations, and tightly controlled and focused research settings. Converging evidence from these perspectives offers the best opportunity to build a strong case for your findings. However, rarely can we pull all of that off in a single project. There usually are not enough resources to cover the problem space to this degree (the government labs seem to more often have the time and funding for such investigations). It’s pretty impressive how realistic well-crafted simulations can feel to participants. We have been able to make senior physicians and air traffic controllers break into a sweat even though no human lives were ever at risk.

One of my most exhilarating days of “research” involved observing the training procedures for landing U2 aircraft. The U2 has a long nose, making it difficult for pilots to see the ground. The training method involves other pilots on the ground guiding the aircraft down by calling out the number of feet the jet is above the ground just prior to touching down (“15ft…10ft…8ft…”). These callouts come from fellow pilots in zippy little sports cars waiting for the U2 to pass overhead and then chasing it down the runway at over 100mph. I was fortunate enough to ride shotgun in one of the two chase cars that followed the aircraft down the runway, in formation, close enough to make accurate distance calls between the landing gear and the runway.

Do you see any challenges to the wider adoption of decision making psychology in your field? There are always challenges; one constantly in need of solutions is that of establishing useful, collectible measures. Part of this requirement stems from the responsibility of presenting a strong return on investment (ROI) argument. In research and development, technology often grabs attention and funding.  It is compelling when a company makes a battery that is small and has longer life – that’s justified spending. It’s more difficult to convince a sponsor that you have improved the decision making process for a group of analysts. The bright side is the military is responsive to decision making research. There are specific programs (and funding) in place for efforts such as training small unit leaders and building decision support elements for tasks including weapons deployment, intelligence analysis, and air traffic management.

How do you see the relationship between academic researchers and practitioners? I think the classic model is that academia does the “basic science” and practitioners apply that science to real-world problems. I believe it is much more than that. We have great partnerships with universities on many active projects, and they are involved in the full range of project activities. They are more than just a place to run first-year psych students through a basic experiment. They are great thought partners and often the first to have produced or read about a new study. Many academics have security clearances, and many are consulting on the side. This makes it easy to engage them on a few levels beyond traditional roles. I also believe that practitioners can help develop new problems of interest for academics to investigate. We really enjoy our interactions with academia.

What advice would you give to young researchers who might be interested in a career in your field? Don’t be afraid to shape your own future. Figure out what you really like to do. Find companies and people that are doing that type of work and engage them. Don’t be frustrated by the fact that your keyword search returns 0 matching job titles. This is a growing field, and most people don’t know much about it. Tell them about it. Show them how you can be useful. If you can help them understand or even predict (with some accuracy) the decisions that will be made by their clients, staff, or management, you can be useful to them. Show that you can help them design choice architectures in their favor, impacting their bottom line or contributing to community improvements, and it will be hard to ignore you.

In my job search, I looked for companies, not job titles or employment ads. Go to conferences and interact with as many people as you can. They won’t all help you, but many are willing. Build your network. There is so much going on out there, so many roles that we don’t even know about. Get yourself out there so you can stumble upon it.

Paul’s profile on Aptima website (incl. publications)

Outside The Matrix: Florian Bauer

Following on from Kiki Koutmeridou, we’ll continue this week with another Outside the Matrix interview: Florian Bauer from Vocatus AG in Germany, who studied psychology and economics at the Technical University in Darmstadt, at MIT, and at Harvard University. He has devoted himself to research into behavioural economics and the psychology of pricing, which were also the subject of his doctorate (“the psychology of price structure”). Starting his career as a strategy consultant at Booz, Allen & Hamilton in 1996, he joined with two colleagues in founding Vocatus AG (a full-service market research and consulting company) in Munich in 1999. He’s also a member of the board of the German Market Research Association (BVM), and regularly teaches as a visiting professor at several universities in Germany. In 2005 and 2010 he won the ‘German Market Research Award’ for the ‘Study of the Year’, in 2010 the ‘Best Methodological Paper Award’ at the ESOMAR Congress (the global market research conference), and subsequently the ESOMAR ‘Research Effectiveness Award’ in both 2012 and 2013.

Tell us about your work: how does decision making psychology fit in it? Well, everything I do is in fact decision making research. I see market research as nothing other than trying to understand the basic building block of an economy – the customer’s decision-making process. And here, there is no better theoretical and methodological basis than behavioral economics, even though this is often neglected in classic market research approaches.

Why did you decide to go into industry instead of continuing in academia? Well, it was primarily “anticipation of regret”. I had a hard time deciding which path to follow. The reason I picked business was partly the idea that I might later regret not having taken the chance to start my own company. The other part was that I really wanted to apply the stuff I was doing and put it to the test in the real world. Still today, this is a thrill to me.

What do you enjoy the most in your current role? Do you see any challenges to the wider adoption of decision making psychology in your field? I love that I can do what I like most: focusing on applying behavioral economics in marketing in general and pricing in particular. I love that we were able to attract a team of more than 70 colleagues who share the same interest and want to rock the boat. The only challenge I can see is the reluctance to adopt new approaches when the old ones are still massively promoted by large international research agencies. But quite frankly, the solution to this is to seek out more innovative clients that are willing to switch gears and go beyond the classic market research approaches. And that works quite well.

How do you see the relationship between academic researchers and practitioners? I think the perspectives are extremely different, although they could profit much more from each other. While academia focuses on a specific effect, a specific theory, and the analysis of different ways of looking at the issue, practitioners focus on a broader array of questions, where in the end they have to make a recommendation that is fast and still good enough.

What advice would you give to young researchers who might be interested in a career in your field? Test and decide, maybe try to do academic and market research in parallel. In any case, find your own way and do not focus on traditional career paths.

Website

Outside The Matrix: Kiki Koutmeridou

Third in our series of those who moved into the private sector after completing their PhD in decision making psychology is Kiki Koutmeridou – a behavioural economics researcher within GfK NOP, a global market research agency based in London. She has a background in Psychology (BSc) and Neuroscience (MSc), and she completed her PhD in Cognitive Psychology at City University in 2013, focusing on memory and the strategic processing of retrieval cues. In her role as the head of the Centre for Applied Behavioural Economics, Kiki works in collaboration with City University, GfK and clients to explore how behavioural economics can be incorporated into traditional market research. Since joining GfK NOP London in September 2012, Kiki has introduced behavioural economics theories to numerous research projects which focus on the application of academic findings to real-life situations.

Tell us about your work: how does decision making psychology fit in it? I’m currently the head of the Centre for Applied Behavioural Economics at GfK NoP, part of the GfK Group, an international market research organization. The Centre for Applied Behavioural Economics is a partnership between City University and GfK NoP in an effort to promote applied knowledge in the decision-making field. So, by definition, my work is all about decision-making psychology. I’ve just completed my PhD in cognitive psychology and more specifically in memory. When the opportunity presented itself to explore human decision-making behaviour in an applied setting, I didn’t think twice and have been working at GfK for two years now.

My role at GfK is two-fold. I contribute to the various client research proposals across the company by integrating the academic knowledge on decision-making into the suggested research design. I’m looking into ways in which the client’s research question can be answered via the various theories and findings from the behavioural economics field. For this purpose, I help at all stages of the project (experimental design, client meetings, field work, data analysis, presentations). In addition, I work in unison with several external (academic or not) collaborators to conduct fundamental research promoting applied knowledge of decision-making behaviour. As a consequence, we are in a position to subsequently approach suitable clients, to share our findings with them and to make a proposal that would be in their best interest.

Why did you decide to go into industry instead of continuing in academia? Actually, I don’t think I’ve made such a decision. I haven’t excluded one for the other (yet!). Like I said, the Centre for Applied Behavioural Economics is in strong collaboration with City University. I spend a day per week at City University, where I finished my PhD, meeting with academics, discussing potential projects and visiting the library. Still being part of an academic institution gives you opportunities for collaborations, fruitful discussions and knowledge sharing. Being part of industry gives you the chance to apply all this knowledge in the real world and observe the outcome. I feel I get the best of both worlds.

What do you enjoy the most in your current role? My role is not restricted to market research. On the contrary, I explore ways in which people can make better decisions in a variety of settings (consumer, health, financial etc…). What really thrills me is the opportunity to either apply the academic knowledge in the real world or derive new knowledge from the applied experiments towards this end. This is a two-way street that can change the status quo of how things function. The idea that I can be part of these changes gives meaning to what I do and great satisfaction.

Do you see any challenges to the wider adoption of decision making psychology in your field? While there is great conversational interest in the academic findings and some recognition of their benefits, it can at times be a challenge to encourage clients to move beyond tried-and-tested approaches. When I first joined the market research industry, I was surprised that psychology wasn’t incorporated more into everyday business. In every meeting about any project, the discussions rang bells about possible psychological theories that could be applied. But experimenting is often not on the table. However, Applied Decision-Making, or Applied Behavioural Economics if you like, is still in its infancy. The challenge is to provide strong evidence of its benefits. It’s a matter of finding the right people in the right places who can promote this line of research and highlight the benefits of decision-making psychology and its methods until they become part of the norm.

How do you see the relationship between academic researchers and practitioners? In a word: complementary. Academics and practitioners bring different but equally important elements into the equation. My current role is an example of just that: the academic environment provides new findings, old and new theories and innovative methodologies; businesses offer the opportunity to apply all this to the real world, and they can provide large sample sizes (the nemesis of the academic world, along with funding). In addition, practitioners have hands-on knowledge of the effects that academics describe. Collaboration between the two can only lead to better formulated, more accurate theories and predictions about human behaviour.

What advice would you give to young researchers who might be interested in a career in your field? The irony is that I’m in need of that advice too, as a young researcher myself! However, based on my experience so far, I have three suggestions:

  1. Seize every opportunity as you never know where it might lead. I started working at GfK as a part-time data analyst. If you had asked me back then I wouldn’t be able to foresee my current role.
  2. Be open-minded. Nowadays, the boundaries are hazy and every field can be combined with just about any other. Do not limit your imagination about potential new applications or approaches.
  3. Be confident and proactive. There isn’t one right way of doing things, so always voice your opinion. You are not supposed to know everything, and quite frankly no one does. Remember that we learn more from our failures than from our successes. The important thing is to keep trying to find the answers and to keep reading around your field of interest. The brain is like a muscle – keep it fit!

Also from GfK NOP: interview with Colin Strong (In The Wild series)

Outside The Matrix: Jolie Martin, Quantitative UX Researcher, Google

After a long break we return to the Outside the Matrix series with Jolie Martin, a quantitative user experience researcher at Google. She received her PhD in Science, Technology, & Management at Harvard through a joint program between Harvard Business School and the Computer Science department, and did post-docs both at the Harvard Law School Program on Negotiation and in the Social and Decision Sciences department at Carnegie Mellon. Prior to joining Google, she was also an Assistant Professor in Strategic Communication at the University of Minnesota.

Tell us about your work: how does decision making psychology fit in it? My title for the last year or so has been Quantitative User Experience Researcher at Google. However cumbersome, all the words are necessary to indicate what I do. Like my colleagues who do “regular” (qualitative) user experience research, my goal is to understand when users successfully satisfy their information needs using Google products. In my case, working on the Search Analysis team, I specifically develop metrics that describe how users interact with features on the Google search results page. The key distinction from other user experience researchers is the data source I draw upon, and as a result the types of analyses I do. Rather than running lab studies or even large online studies through tools like MTurk, for the most part I rely on data recorded in logs to tell me how real users behave under natural conditions. The benefit of this approach is massive amounts of data. Nearly everything of interest is statistically significant, sometimes even with very minor tweaks to the product that are imperceptible to the average user. The drawback – although it’s sometimes the fun part – is that I have to draw inferences from behavioral signals about users’ preferences, intentions, and satisfaction.

Judgment and decision making to the rescue! My theoretical background in this field has been extremely helpful in formulating hypotheses about why users search the way they do, from the queries they enter to the sequence of clicks that they take. For example, in considering ways to improve the user experience with exploratory tasks that require large amounts of subjective information (say, choosing where to go on vacation), I need to be mindful of contrasting interpretations of a user’s behavior. If she spends more time and clicks more links, this could be a bad signal that she simply didn’t find the information necessary to make a decision, that she suffered from information overload, or that she was distracted and continued browsing to procrastinate on a more worthwhile task. On the other hand, it could be a good signal that we offered her a rich set of information sources – increasingly tied to her personal characteristics and social networks – that offered insights worth delving into. To tease apart these interpretations requires testing mental and behavioral models of an extremely diverse set of users.

Why you decide to go into industry instead of continuing in academia? Unlike many of my academic colleagues – and even many people I know in industry who jumped ship – I never embarked on a PhD specifically to pursue a career in academia. In fact, I was clueless that this was the expectation of my advisors until several years into my PhD program! I was operating under the assumption that building theoretical knowledge and methodological skills would serve me well in any career. At some point right around my third or fourth year of grad school, I did become somewhat indoctrinated to the notion that academia is the “highest calling” and we should leave the actual implementation of our ideas to others. And of course I realized how difficult it would be to return to academia should I leave, so with this in mind, I gave it the old college + MBA + PhD + 2 post docs + assistant professorship try before finally divesting myself of those sunk costs. I liked each of my academic positions, but often felt as if I was spinning my wheels to achieve an objective (publishing in journals read almost exclusively by other academics) that I didn’t really care about, so when Google contacted me, I figured it couldn’t hurt to interview. During the process, I was surprised to find many other people like me with PhDs and interests in “pure” research. These were very smart people, and all had various personal and professional reasons for leaving academia, but it became clear to me that it was a choice, not necessarily indicating that someone couldn’t make it in academia.

That said, I am a firm believer that people enjoy things that they are good at, and where they can continue improving over time. I thought Google would offer exactly this for me. I have always loved building cool stuff, which is really the core of what we do. At the same time, there would be a lot to learn. When I accepted the offer at Google, I took a one-year leave from my assistant professorship (which was extremely generous of my department chair to offer), and it was nice to have that safety net should I dislike my new job. During the week of orientation with mostly software engineers, I thought more than once that I might need to use it. Just about everything flew over my head. But once I settled in with my teammates, I realized that everyone was willing to help, and no one had all the answers; doing logs analysis from end to end is complex by its very nature, and no one could step into the role as an expert. The expectations of me were that I be persistent and keep asking interesting questions. After a year in my position, the torrent of learning opportunities hasn’t tapered off in the least.

What do you enjoy the most in your current role? The main appeal of my job is the rapid pace that I can have impact on products that improve people’s lives in a tangible way, sometimes just through offering them a whimsical break from a busy life. I love working for a company that takes this mission seriously, and always holds it above monetary factors. Of course, this is not true of every company, so I feel lucky in that regard. I also have a nice variety of projects that result from mutual selection, and work with people in just about every role. There are only about 10 of us across the company in the Quantitative User Experience Researcher position, and our ability to glean insights from large data sets is highly valued by others. There is no prescribed way to perform these analyses, so we have freedom to use novel methods in distributed computing, machine learning, and natural language processing, among others. Last but not least of what makes my work stimulating is the chance to witness the evolution of cutting edge new technologies, such as riding in a self-driving car, wearing Glass, and seeing a prototype of a balloon that may one day provide internet in developing countries. Making these products useful requires not only tech savviness, but also political and legal knowhow.

Do you see any challenges to the wider adoption of decision making psychology in your field? Google and many other large companies are quite receptive to using decision making psychology in some ways. For example, I was involved in a “20% project” (whereby we can spend 20% of our time on something completely unrelated to our job function) running consumer sentiment surveys during the Democratic National Convention and presidential debates. I’m now working on another 20% project that draws upon academic research to test how environmental and informational factors shape food choices in our cafes. Similar studies have been conducted at Google to examine how defaults affect 401K allocations, and programs have been implemented based on the findings, with material effects on employee well-being.

However, for several reasons, there is more resistance to using basic research in the creation of products for end users. First, many companies in the technology industry are comprised mainly of software engineers (at last count, about 75% of Google employees) who may not consider psychology relevant. They often expect that users are “rational” in the sense of taking optimal actions given the set of options and information at their disposal, whereas we know this is rarely the case. Second, what research we do has focused on user response to specific technologies, with little ability to then generalize to a broader set of stimuli or outcome measures. This is related to the fast product development cycle I mentioned previously; we simply don’t have time to test fundamental psychological principles or the product will be launched and onto v2 before we have anything to say about it. This is changing gradually as the value of longer-term focus is realized. Third, while publishing is encouraged, there are not huge incentives to do so, especially given the more rigorous hoops we have to jump through in obtaining approval. Even in cases where we have interesting findings applicable to psychology more broadly, we often can’t disclose them for proprietary or privacy reasons.

How do you see the relationship between academic researchers and practitioners? In my opinion, the ideal relationship between academics and practitioners is one that takes into account the comparative advantages of each. While academics are usually more in touch with trending or provocative research topics that are likely to interest audiences and gain traction, practitioners are more aware of the available data sources and product use cases. Similarly, in terms of resources, academic connections provide legitimacy and wider dissemination of research findings, while those of us in industry can potentially be more useful in supplying funding, a sample population for experiments (be they users or employees), and analysis infrastructure (i.e., computing power). Collaborations would be more synergistic if there was greater engagement in both directions, with academics developing research questions based on real business or social issues, and practitioners making the additional effort to share findings via peer-reviewed conferences and journals.

What advice would you give to young researchers who might be interested in a career in your field? I’d suggest that students contemplating a transition to industry try a temporary or part-time internship; it’s a relatively low risk way to test the waters, and realistically, given the scarcity of professorships at top research universities, your advisors should support your consideration of other options. However, also be aware that one company isn’t going to fully represent all of industry, the same way stepping into a random graduate program or postdoc could be quite different from the one that is the best fit for you. I interned at a hedge fund during grad school and knew pretty quickly that it wasn’t for me, but it was a valuable experience nonetheless.

Perhaps more feasible for faculty members who are dissatisfied with certain aspects of their careers (e.g., working weekends and responding to emails at 3am), consider reaching out to people at companies of interest to you. You will likely find that they are excited to talk to someone with the wherewithal to do in-depth analysis of their users, and may even be open to handing over data or running experiments with you. Ask if you can present at company meetings to get a sense of the culture and style, or invite industry folks to present at your university. And don’t just build your network, but also maintain it by staying in touch with people you’ve worked with in the past. Referrals from a company’s current employees will make a big difference if you decide to apply!

Viewpoint: Why social science grad students make great product managers

Litvak
A couple of months ago we featured Paul Litvak from Google in our Outside the Matrix series. After his interview, his inbox was inundated with questions from readers and he recently wrote a response on his own blog which we thought was so fantastic we wanted to republished it on InDecision as well. So, this week Paul shares his views on why social science grad students make excellent product managers. Note: even if you’re not a grad student yourself, it’s worth reading Paul’s views in case you’re ever in a position to hire one! 

After my interview with InDecision Blog, a number of graduate students emailed asking me about careers in technology (hey, I asked for it). They were a very impressive lot from top universities, but their programming skills varied quite a bit. Some less technically minded folks were looking at careers in technology aside from data scientist. Enough of them asked specifically about product management, so I thought I would combine my answers for others who might be interested.

What does a product manager do?
Brings the donuts. The nice thing about social science grad students for whom reading about product managers is news is that we can skip over the aggrandized misconceptions about product management that many more familiar with the technology space might harbor. The product manager is the person (or persons) that stands at the interface between an engineering team building a product and the outside world (here includes not only the customers/users of the product, but also the other teams within a given company who might be working on related products). The product manager is in charge of protecting the “vision” of the product. Sometimes they come up with that vision, but more often than not, the scope of what the product should be and what features it needs to have today, next week, or next year is something that emerges out of interactions between the engineers, the engineers’ manager, the product manager, company executives, etc etc. The product manager is really just the locus of where that battle plays out. So obviously there is a great need for politicking at times as well.

But wait, there’s more! Once the product is actually launched, it is typically still worked on and improved (or fixed). So the product manager is also the person that gets to figure out how to prioritize the various additional work that could be done. But how do they figure out what needs to be changed or fixed? This is one of the places where research comes in! So someone like me might do analysis on the data of people’s actual usage of the product (the product manager prioritized getting the recording of people’s actions properly instrumented, right? RIGHT?). Or a qualitative researcher might conduct interviews of users in the field and try and abstract an understanding from that. Either way, the product manager has to make sense of all this incoming information and figure out how to allocate resources accordingly.

Why would social science graduate students be good at that?
Perhaps you can see where I’m going with this. Products are increasing in scope. Even a simple app has potentially tens of thousands of users. Quantitative methods are becoming increasingly important for understanding what customers do. In such an environment, being savvy about data is hugely advantageous. In the same way that many product managers benefit from computer science degrees without coding on a daily basis, product managers will benefit from knowing statistics, along with domain expertise in psychology, sociology, anthropology even if they aren’t the ones collecting and analyzing the data themselves. It will help them ask the right questions and to when to trust results, and when to be more skeptical. It will help them operationalize their measures of success more intelligently.

The soft skills of graduate school also translate more nicely. Replace “crazy advisor” with “manager” (hopefully a good one) and replace “fellow graduate students” with “other product managers” and many of the lessons apply. Many graduate social scientists will have plenty of experience with being part of a lab and engaging in large-scale collaborative projects. Just like in graduate school, a typical product manager will spend hours fine tuning slide decks and giving high stakes presentations meant to convince skeptical elders of the merit of a certain course of research (replace with: feature, product, or strategy).

Finally, building technology products is a kind of applied social science. You start with a hypothesis about a problem that people are having that you can solve. Of course, as a social scientist, the typical grad student understands just how fraught this is! Anthropologist readers of James Scott and Jane Jacobs and economists who love their Hayek will have a keen appreciation for spontaneous order (“look! users are using this feature in a totally unexpected way!”), as well as the difficulties of a priori theories of users’ problems or competencies. In fact, careful reading of social science should make a fledging PM pretty skeptical of grand theories. For instance–should interfaces be simpler or more complicated? How efficient should we make it to do some set of common actions? If everything is easily accessible from one click on the front page, will there be overload of too many buttons? Is that simpler or more complicated? These sorts of debates, much like debates about the function of particular social institutions or legal proscriptions, are not easily solved with simple bromides like “less is always better”, or “more clear rules, less discretion” (I am reading Simpler: The Future of Government by Cass Sunstein right now, and he makes this point very well with respect to regulations). The ethos of the empirical social scientist is to look for incremental improvements bringing all of our particularist knowledge to bear on a problem, not to solve everything with one sweeping gesture. This openness is exactly the right mentality for a product manager, in my opinion.

Conclusion
I hope I have at least partially convinced you that as an empirical social scientist, you would make a great product manager. Now the question is, how do I convince someone in technology of that? The short and most truthful answer is, I’m not 100% certain. It might take some work to break into project management, but I see lots of people with humanities background doing it, so it can’t be that hard (One of my favorite Google PMs is an English PhD). One thing I would suggest is carefully framing your resume to emphasize your PM-pertinent skills–things like, group project management, public speaking experience, making high stakes presentations, etc. You might also consider making a small persuasive deck to show as a portfolio example of a situation where you convinced someone of something (your dissertation proposal could work?). This would be a great start. Another thing is consider more junior PM roles initially–as a PhD coming out of grad school you are still going to make a fine salary as an entry-level product manager. If you apply these principles I have no doubt that you will quickly move up.

Read Paul’s original interview here.

Outside The Matrix: Paul Litvak

LitvakPaul Litvak is currently a Quantitative Researcher working on the Google+Platform team to improve people’s social experiences online. Prior to that he was a Data Analyst at Facebook working on fighting fraud, tracking the flow of money and improving customer service. He also has a PhD in Behavioral Decision Research from Carnegie Mellon and his dissertation was on the impact of money on thought and behavior. During graduate school he co-founded a boutique data science consulting firm, the Farsite Group, which is consulting for some of the largest retailers and private equity firms to improve their data-informed decision-making processes. Through these various activities he’s managed to keep a foot in both the academic decision science and business data science worlds for the last 6 years. 

Tell us about your work: how does decision making psychology fit in it? I work at Google as a quantitative user experience researcher–I use quantitative methods to try and understand how people are (or aren’t using) features of Google products with the hopes of recommending ways to improve upon them. Often times this involves running an experiment but can also often involve correlational analyses instead. Sometimes the sample sizes are so large (millions or even billions!) you don’t need to run any statistics at all–you just count the rate at which some event happened.

Decision-making psychology fits into this work in at least three ways. First, in hypothesis generation and testing, knowing which  effects from psychology are relevant in a situation gives you great product intuition. For example, you might be analyzing how users bid on ad space and remind the engineers and designers of how much the anchor matters. Second, it’s useful in designing and conducting good experiments. In online experiments you are always weighing the pros and cons of different operationalizations of user constructs (e.g. what is “engagement” or “satisfaction” in the context of a particular website?). Being able to operationalize a variable intelligently is the difference between an experiment that convinces a Product Manager to change things accordingly and one that is totally ignored. Third, decision science lets you think clearly about analytic problems that come up a lot in software design. Nowadays it is common to use some machine learning algorithm to classify some otherwise messy data. In doing so, it is crucial to be able to think clearly about false positives and false negatives, and tradeoffs between the various costs of being wrong versus not making predictions for some cases. Fundamental statistical reasoning concepts (e.g. Bayes rule) never go out of style!

Why you decide to go into industry instead of continuing in academia? For me, it was a combination of factors. First, for many reasons (some outside my control), my research hadn’t been as successful as was needed to secure a good tenure track job. In order for me to have continued I would have had to have taken a postdoc for some number of years and continue working hard in the hopes that I could get sufficient papers published. I felt some amount of despair over my floundering career. (In retrospect, I’m not sure how overblown that was.)

Also, I had always had some interest in technology and business. I majored in computer science (and philosophy–I contain multitudes!) and had an interest in technology since I was a 10-year-old programming BASIC in my friend’s basement. Meanwhile I had co-founded a boutique statistics consulting group, Farsite (http://farsitegroup.com), that had had some early successes. Through trying to sell a variety of large businesses on consulting services (which I did in between running lab studies for my dissertation) I learned more and more about the business world. We even won a few contracts! More and more, I was enjoying applying the same scientific thinking I was using in research to solve business problems, like where to put pharmacies.

There were also quality of life issues. I wanted to have a life outside my job, and that seemed close to impossible as an academic. I noticed my advisor, who was a young tenure-track faculty, worked like a madman, seemed very stressed and unhappy. (He seems better now, and might dispute my contention that he was unhappy then.) Consequently, when a job opportunity came along to work for Facebook, pre-IPO, in Austin, Texas, where my best friend was living, it was nigh impossible to turn down.

What do you enjoy the most in your current role? By far the thing I enjoy the most about my role is having a large impact on the world. While I worked for Facebook, my analyses and code affected literally millions of dollars of revenue, and helped keep the site clean of a lot of bad content that would have made people’s daily experience much less pleasant. At Google, my research has launched whole product initiatives, determined whether to keep or get rid of product features, and literally affected what millions of people see across all of Google’s products every day. I have a huge amount of flexibility to work on research projects that interest me, in part because I love working on, and am good at formulating impactful research.

Do you see any challenges to the wider adoption of decision making psychology in your field? Yes, there are at least three challenges:

1) Because of disproportionate incentive to produce positive results and an increasing amount of researchers chasing fewer dollars and jobs, I do think the pressure to cut corners has increased significantly. This is impacting the quality of research that is being produced. Not just in terms of replicability and p-hacking, but also in terms of theoretical comprehensiveness. I read a lot of papers and I can’t help but feel like decision science isn’t very cumulative. Most researchers are chasing individual findings instead of trying to integrate our understanding of decision-making into a cohesive model or theory. It feels like it’s stagnated a bit to me–the best papers I read were written in the 70s, 80s, and 90s. I think the grab-bag nature of our findings makes it difficult to know which findings to apply in a given new context.

2) Another related problem is interactions. Social scientists uncover many many effects, but in real life many different effects could be active at the same time. It’s hard to know if all these effects should be additive, or what will win out when certain psychological antecedents suggest opposite effects. Perhaps more experiments at large scale can help this.

3) A third problem is entrenched attitudes toward experiments. I’ve definitely seen companies and executives resistant to the idea of running experiments. Sometimes they are worried about what will happen if the press finds a weird version of a product or feature. Sometimes they object to a lack of uniformity and vision in a product offering. Sometimes they are just ignorant about statistics, and have basic skepticism about generalizability and research. I’m happy to say that I think this has changed a great deal over the last 5 years. Nate Silver has done some good work in this area.  🙂

How do you see the relationship between academic researchers and practitioners? I see the relationship as fundamentally symbiotic.

Academics help practitioners in at least 4 ways (even setting aside direct collaboration, which is quite common nowadays): creating new methods, discovering findings in the lab that can then be applied, creating new theories from which to base products on (e.g. Goffman’s work on self presentation and different identities could affect the sharing model in social networks), and giving a sense of context and history. The last one is particularly important for various techno-utopists out there who think that they can use technology to fundamentally alter social relations without considering the results of previous attempts to do just that.

Practitioners help academics as well; they provide lots of data and invent useful technology. Have decision scientists and psychologists started thinking yet about what Google Glass will do to transform research? Imagine field studies were you could record what the subject is seeing when they make their choice? Or think about what the second screen could offer in terms of real time experience sampling or extra information to alter a choice. The possibilities are endless. Finally, and most obviously, practitioners often have access to lots of money… which is helpful, I’m told.

What advice would you give to young researchers who might be interested in a career in your field? Three things:

1. Come talk to me. 🙂

2. Learn some programming. R, then SQL, then Python, or some other scripting language. The more programming you learn the higher up the food chain you can go. If you know a lot of programming, you aren’t limited by what data exists, but only by what data you can create. This is hugely empowering, and increases your impact considerably. However, if all you learn is R, that is still incredibly useful,and will still get you into a variety of jobs.

3. Be curious! So many useful insights come from a broader curiosity about the world. This applies to both academic and worldly knowledge. Very random papers have led me to business/product insights. Similarly, keeping curious about what’s going on in the world is what enabled me to get into technology in the first place. Keep learning!

Want to read more? Try these…