Guest post: Michael Blastland on Uncertainty

This week we have a guest post from journalist, broadcaster and author Michael Blastland. In addition to creating the BBC Radio 4 programme ‘More or Less’, he has authored several books, including The Tiger That Isn’t (published in the US as The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life) and The Only Boy in the World, about his son’s autism. He is a well-known campaigner for statistical literacy. His most recent book, The Norm Chronicles: Stories and numbers about danger, looks at the risks of everyday life and how to decode them.

People tend not to like uncertainty. It’s confusing. It makes our choices riskier. What are we supposed to do when we’re not sure what’s going on?

No, if it can be nailed down, nail it. If it can be settled, sort it. And even if it can’t, maybe any answer is better than none. Faced with the stranger on the moor who says the true path is definitely this way, or the one who says ‘not sure, maybe over there somewhere,’ which do you choose?

For the stranger on the moor substitute the political leader, or the business leader. We like people who seem to know.

Then, a few weeks ago an old friend, Oli Hawkins, said he’d had an idea.  

Understatement.

What’s more, it was an idea about how to show the uncertainty in data.

Hazardous understatement.

More accurately, it was an idea about how to bring uncertainty to life so that we see its full extent and implications.

And I thought: this is brilliant; some people will hate it.

What I think Oli had done was to find a way of making statistical doubt more visible. This is no small trick. In doing so, he might have helped us see the world differently. But there’s also little doubt that it makes life less comfortable.

The nub of the problem he has been trying to overcome is, in a word, pictures.

I agree, that doesn’t sound like a problem. In fact, pictures are often the answer to the problem of how to interpret data. They can crystallise ideas and make vagueness vivid. Turned into pictures, numbers escape the fog of evidence for the blue sky of clarity. We take in so much more from a picture than from columns of data: we spot patterns faster, we remember the picture, and it can even be beautiful.

As with a character in a film compared with a character in a novel, the wry smile and the twinkle in the eye are given settled form. For some of us, it’s hard to stop thinking that James Bond is Sean Connery.

‘So?’ you say. ‘What’s wrong with that? Isn’t this exactly what visualisation strives to do?’ Well, sometimes there’s nothing wrong at all. Sometimes it’s fab.

And sometimes it’s fantasy. Especially when the ideas themselves ooze doubt, when vagueness and uncertainty might be half the point, when the numbers are more mush than concrete.

I’m a huge fan of visualisation. Who isn’t? But uncertainty is visualisation’s portrait in the attic: a dodgy secret, an orthogonal truth, in keeping with the human tendency to avoid it.

How to say that the line is most likely here, doing this, but could be way over there doing that? This has never, in my view, been satisfactorily sorted. The understandable tendency of a lot of data-viz is to ignore it.

On those occasions when uncertainty is acknowledged, a standard approach is the error bar. Here’s an example from Oli’s discussion of the problem:

[Image: chart with error bars, from Oli Hawkins’s post]

‘The margin of error,’ he says, ‘reflects the 95% confidence interval for the estimate, which means there is a 95% chance that the actual value is within the range shown by the error bar and a 5% chance that it is outside this range. The size of the error bar is determined by the size of the sample on which the estimate is based.’
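To make that mechanical, here is a minimal sketch of where such an error bar comes from, assuming a simple random sample and the usual normal approximation; the figures are invented for illustration and are not Oli’s:

```python
import math

def margin_of_error_95(p, n):
    """95% margin of error for an estimated proportion p
    from a simple random sample of size n (normal approximation)."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimate
    return 1.96 * se                 # +/- 1.96 standard errors covers ~95%

# Hypothetical example: an estimate of 25% from a sample of 1,000
print(f"25% +/- {margin_of_error_95(0.25, 1000):.1%}")  # about +/- 2.7 points
```

Note the square root: halving the margin of error means roughly quadrupling the sample, which is why the bars shrink so slowly as surveys grow.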

But as Oli points out, the error bars simply follow the trend.

They move up and down in a neat little dance either side of the central estimate, and our eyes follow, as if all estimates dance in the same direction. In fact, the true value might lie at any point along those error bars, or beyond, though with diminishing probability. That is, the true value could be at the top of one error bar and the bottom of the next. So this visualisation – improvement though it is on a plain bar chart – arguably obscures the potential movement.

Another example is the Bank of England’s fan charts for GDP, which apply both to future estimates and, more to the point here, to GDP in the past, about which we also remain uncertain. These fan charts show a range of estimates of the true value, in bands of probability.

They’re good. I like them. But they have exactly the same problem. All estimates echo the central line and visually reinforce our impression of the trend. Not the idea at all.

[Image: Bank of England GDP fan chart]

What we tend to ‘see’ in this chart, I think, is a rise and then a fall in the rate of growth in the past few years that might have happened higher or lower than the central estimate, but was basically in lockstep with it. And people draw all sorts of conclusions from that supposed trend about the conduct of economic policy.

But is it true? Because what could have happened is that the rate of GDP growth rose continually from 2009, as it swung from the bottom to the top of the Bank’s range of estimates. Rather than an economy that skirted double- or even triple-dip recession, maybe we had an economy going from strength to strength for more than three years. Or maybe it was the other way round, and we recovered spectacularly in late 2009, then slammed into reverse and another shallow but protracted recession.

You’ll find little economic comment to this effect, and it’s neither the Bank’s nor the ONS’s best guess, but it is perfectly within what the Bank thinks are reasonable bounds of uncertainty. Maybe one reason this discussion doesn’t happen, and the doubts tend to be smothered in the rush to an appalled/euphoric (delete as applicable) reaction, is that we don’t have the right way of showing their extent.

And fan charts like these are a relatively recent innovation. Before them, the lines were even more concrete.

There are other techniques for representing uncertainty. Howard Wainer’s ‘Picturing the Uncertain World’ is an interesting exploration of the subject. But we can, and should, do more.

‘You know…’ I say, trying to inspire audiences of designers, ‘you have an opportunity here to work out how to use visual techniques to bring uncertainty properly to life. Do that, and you could help people see, maybe for the first time, the way that statistical evidence relates to real events. This could change the way we see the world.’

But if that sounds too much like hard work, well then, as I’ve put it elsewhere, we can always carry on with the same old statistical blah… only prettier. As Tim Harford has said, misinformation can be beautiful too.

My own attempt at the uncertainty problem was to make some fantasy league tables in which the position of each imagined school, or hospital, or whatever, bounced randomly within the confidence intervals, all over the shop. Who really ranked where? You couldn’t be sure. Which is irritating, but often as it should be.

But how to make this movement proportionate to the real probabilities? Cue Oli. He has found a way (http://olihawkins.com/visualisation/1) to animate the estimates within the confidence intervals so that they pop up just as often as probability suggests they should, given the data. He shows that this can be done with interval data, so that we discover how different a trend might look over time, as well as with categorical data, like the school league-table example. He’s done it as a series of snapshots rather than a continually fluid movement, which helps pick out more clearly what the true trend might have been.
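The mechanics are worth sketching, because they’re simpler than the effect suggests. This is not Oli’s actual code (that’s at the link above), just a minimal Python sketch of the idea: treat each period’s estimate as the centre of its own sampling distribution, draw one value per period for each snapshot, and redraw. Every figure below is invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Invented central estimates (say, thousands of migrants per year)
# and invented 95% margins of error; the real series is in Oli's post.
years = np.arange(2002, 2013)
estimates = np.array([153, 148, 245, 267, 265, 273, 290,
                      229, 256, 205, 177], dtype=float)
moe_95 = np.full_like(estimates, 35.0)  # assume a constant +/-35 margin

# A 95% interval spans +/-1.96 standard errors, so se = moe / 1.96.
se = moe_95 / 1.96

# Four snapshots: each year's value is drawn independently from its own
# sampling distribution, so one draw can sit at the top of its interval
# while its neighbour sits at the bottom.
fig, axes = plt.subplots(2, 2, sharey=True, figsize=(8, 6))
for ax in axes.flat:
    draw = rng.normal(estimates, se)
    ax.errorbar(years, estimates, yerr=moe_95, fmt='none', ecolor='lightgray')
    ax.plot(years, draw, marker='o')
plt.tight_layout()
plt.show()
```

Each pass through the loop is one plausible history, sampled in proportion to the probabilities the confidence intervals imply.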

And…? Isn’t all this obvious? If that’s what you think, you’d be right in the sense that it is all implied by the existing maths of confidence intervals.

The answer may be that all that is new here is the articulation of an idea. And it may be true that the idea is already latent in the prior concept of confidence intervals. So what’s the big deal?

The big deal for me is that an idea that is latent – except in the minds of a few – isn’t an idea at all for the many. Articulating it is every bit as important as knowing it. I would say that, being in the communication business. But maybe the proof of how important it is to articulate these things, and also the proof of how well it’s been done to date, is how little there is in public argument about the extent of the uncertainty around numbers like these or what that uncertainty implies. If the idea is obvious, where’s it been?

Now you could just put that absence down to the ignorance of the commentariat and politicians, or you could add that maybe we could do it differently.

The acid test is what we see with the new method. Applied to the migration data, the effect is electric. Here are a few grabs from Oli’s visualisation as it runs through the variety of stories that could have been told.

Like this one…

[Image: snapshot from Oli Hawkins’s migration visualisation]

Fairly flat, bit of a crest around 2010 maybe, maybe a hint of a rising trend – though this could be no more than a couple of weird years. Nothing to my eye leaps off the page over the long run.

Or like this.

[Image: another snapshot from the migration visualisation]

Which looks pretty clearly like a step change in 2004. The numbers roughly double. A good one for those who want to say we ‘lost control of the borders’ and a sharply different reading of history.

Or what about this?

[Image: another snapshot from the migration visualisation]

In which the key date moves back six years as we see a broadly rising trend all the way until about 2010, when ‘determined action by the Coalition finally brought it under control,’ presumably.

Or like this, when determined action by the Coalition since 2010 made hardly any difference.

[Image: another snapshot from the migration visualisation]

Just click and play to see the variety of stories that could be true. The implications of the uncertainty are easier to grasp and harder to ignore. What also emerges is that some stories are more common and consistent than others. Very few iterations show 2012 higher than 2010, for example. So we see both what is most uncertain and what is most likely. It’s not at all the case that the upshot of all this is to throw up our hands and say we’re clueless about what happened.
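That last observation can be quantified under the same model: the share of iterations in which one year beats another is just a probability you can simulate. Again, the central estimates and standard errors here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical central estimates and standard errors (thousands)
draws_2010 = rng.normal(256, 18, n)
draws_2012 = rng.normal(177, 18, n)

share = np.mean(draws_2012 > draws_2010)
print(f"Iterations with 2012 above 2010: {share:.2%}")  # well under 1%
```

So the animation and the arithmetic agree: uncertain, but far from clueless.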

Not new? It’s revelatory. What if we did it to the GDP lines on the Bank of England’s fan chart, and animated them through a range of possible stories in all their top-to-bottom potentially volatile variety? What if we did the same to the monthly unemployment data?

Yes, it’s disturbing, destabilising, unsatisfactory in so many ways. It makes the world less nailable, less sorted. And I love it.

What’s especially thought-provoking is that it makes you wonder how many more techniques there might be that could bring life to statistical insights, rather than bringing design or false clarity to dodgy data.

Don’t get me wrong. I think there’s some fantastic stuff out there. And anyway, uncertainty isn’t always a big factor. All the same, data visualisation is no more than a fancy distraction if it doesn’t help us see better. But when it does…  wow.

Norm Chronicles interactive site

Profile in the Guardian

Viewpoint: Why social science grad students make great product managers

A couple of months ago we featured Paul Litvak from Google in our Outside the Matrix series. After his interview, his inbox was inundated with questions from readers, and he recently wrote a response on his own blog which we thought was so fantastic we wanted to republish it on InDecision as well. So, this week Paul shares his views on why social science grad students make excellent product managers. Note: even if you’re not a grad student yourself, it’s worth reading Paul’s views in case you’re ever in a position to hire one!

After my interview with InDecision Blog, a number of graduate students emailed asking me about careers in technology (hey, I asked for it). They were a very impressive lot from top universities, but their programming skills varied quite a bit. Some less technically minded folks were looking at careers in technology other than data science. Enough of them asked specifically about product management, so I thought I would combine my answers for others who might be interested.

What does a product manager do?
Brings the donuts. The nice thing about social science grad students for whom reading about product managers is news is that we can skip over the aggrandized misconceptions about product management that those more familiar with the technology space might harbor. The product manager is the person (or persons) who stands at the interface between an engineering team building a product and the outside world (which here includes not only the customers/users of the product but also the other teams within a given company who might be working on related products). The product manager is in charge of protecting the “vision” of the product. Sometimes they come up with that vision, but more often than not, the scope of what the product should be and what features it needs to have today, next week, or next year is something that emerges out of interactions between the engineers, the engineers’ manager, the product manager, company executives, and so on. The product manager is really just the locus where that battle plays out. So obviously there is a great need for politicking at times as well.

But wait, there’s more! Once the product is actually launched, it is typically still worked on and improved (or fixed). So the product manager is also the person who gets to figure out how to prioritize the various additional work that could be done. But how do they figure out what needs to be changed or fixed? This is one of the places where research comes in! So someone like me might do analysis on the data of people’s actual usage of the product (the product manager prioritized getting the recording of people’s actions properly instrumented, right? RIGHT?). Or a qualitative researcher might conduct interviews with users in the field and try to abstract an understanding from that. Either way, the product manager has to make sense of all this incoming information and figure out how to allocate resources accordingly.

Why would social science graduate students be good at that?
Perhaps you can see where I’m going with this. Products are increasing in scope. Even a simple app potentially has tens of thousands of users. Quantitative methods are becoming increasingly important for understanding what customers do. In such an environment, being savvy about data is hugely advantageous. In the same way that many product managers benefit from computer science degrees without coding on a daily basis, product managers will benefit from knowing statistics, along with domain expertise in psychology, sociology or anthropology, even if they aren’t the ones collecting and analyzing the data themselves. It will help them ask the right questions, to know when to trust results and when to be more skeptical. It will help them operationalize their measures of success more intelligently.

The soft skills of graduate school also translate nicely. Replace “crazy advisor” with “manager” (hopefully a good one) and “fellow graduate students” with “other product managers”, and many of the lessons apply. Many graduate social scientists will have plenty of experience with being part of a lab and engaging in large-scale collaborative projects. Just like in graduate school, a typical product manager will spend hours fine-tuning slide decks and giving high-stakes presentations meant to convince skeptical elders of the merit of a certain course of research (replace with: feature, product, or strategy).

Finally, building technology products is a kind of applied social science. You start with a hypothesis about a problem that people are having that you can solve. Of course, as a social scientist, the typical grad student understands just how fraught this is! Anthropologist readers of James Scott and Jane Jacobs, and economists who love their Hayek, will have a keen appreciation for spontaneous order (“look! users are using this feature in a totally unexpected way!”), as well as the difficulties of a priori theories of users’ problems or competencies. In fact, careful reading of social science should make a fledgling PM pretty skeptical of grand theories. For instance: should interfaces be simpler or more complicated? How efficient should we make it to do some set of common actions? If everything is easily accessible from one click on the front page, will there be an overload of buttons? Is that simpler or more complicated? These sorts of debates, much like debates about the function of particular social institutions or legal proscriptions, are not easily solved with simple bromides like “less is always better” or “more clear rules, less discretion” (I am reading Simpler: The Future of Government by Cass Sunstein right now, and he makes this point very well with respect to regulations). The ethos of the empirical social scientist is to look for incremental improvements, bringing all of our particularist knowledge to bear on a problem, not to solve everything with one sweeping gesture. This openness is exactly the right mentality for a product manager, in my opinion.

Conclusion
I hope I have at least partially convinced you that as an empirical social scientist, you would make a great product manager. Now the question is, how do you convince someone in technology of that? The short and most truthful answer is: I’m not 100% certain. It might take some work to break into product management, but I see lots of people with humanities backgrounds doing it, so it can’t be that hard (one of my favorite Google PMs is an English PhD). One thing I would suggest is carefully framing your resume to emphasize your PM-pertinent skills: things like group project management, public speaking experience, making high-stakes presentations, and so on. You might also consider making a small persuasive deck to show as a portfolio example of a situation where you convinced someone of something (your dissertation proposal could work?). This would be a great start. Another thing is to consider more junior PM roles initially: as a PhD coming out of grad school, you are still going to make a fine salary as an entry-level product manager. If you apply these principles I have no doubt that you will quickly move up.

Read Paul’s original interview here.

Guest post: Why we should talk to the media (part 2)

Following the guest post from Lisa Munoz at SPSP, this week we hear from Claudia Hammond, an award-winning presenter, writer and psychology lecturer, on why she thinks young researchers in particular should engage more with the media. In addition to being part-time faculty at Boston University’s London base, she presents All in the Mind on BBC Radio 4 and Health Check on BBC World Service, and is the author of “Time Warped: Unlocking the Mysteries of Time Perception”.

In my many years hosting radio programmes for the BBC I have interviewed dozens of young researchers. When a new paper comes out, we like wherever possible to talk to the people who have done the research themselves instead of relying on a commentator. From our perspective it makes science programmes sound better, providing listeners with a direct link to the scientists.

So that’s what’s in it for us as broadcasters, but why should researchers bother? Scientists can sometimes feel frustrated that their work is misunderstood by the general public. Science literacy varies massively amongst the population, and speaking about your work to the media can help to demystify it. All my work, whether making radio or TV programmes or writing books, aims to make science accessible and to increase understanding of the importance of evidence. Researchers themselves have a vital part to play in this, and by doing interviews they can reach vast numbers of people in one go. BBC World Service, where I host a weekly programme featuring newly published health and medical research, has 44 million listeners. If the public is expected to continue to fund scientific research, they have a right to know how their money is being spent, and this is a great way for scientists to get their message across about the importance of their research.

There can also be advantages for researchers themselves. Increasingly grant-giving bodies are making public engagement a condition of their grants, so it’s as well to start practising early in your career.

I also know of many situations where appearing in the media has led to new research collaborations. People often get in touch with me after they hear an interview wanting to contact someone they heard on my programme. When we have studio discussions the participants often exchange emails afterwards so that they can work together. I’m surprised at how often they are unaware of other researchers in the same field. It sometimes feels like a researcher dating agency, but it’s good to see people sharing their ideas.

Sometimes researchers worry about what their peers will think, but to be honest, the secret to doing a good interview is to imagine you are explaining it to a non-scientific friend. Don’t imagine your supervisor at your shoulder. You know your own work inside out and you can explain it. You’re not going to be asked questions so in-depth that you can’t answer them, but it’s important to consider the context of your research. Has this topic been in the news recently? How big is the problem that your research addresses? With a little preparation before an interview, you can have an impact.

Website

Guest post: Why we should talk to the media

This week we have a guest post from Lisa Munoz, the Public Information Officer at Society for Personality and Social Psychology. She spoke at the recent SPSP conference in New Orleans on how researchers can get their message across in the media – today she tells us why she thinks that’s a worthwhile thing to do. 

A Love Letter for Public Outreach

As I write this post, it is Valentine’s Day week, one of the busiest weeks for psychology in the news. Relationships, gift-giving, sex, cultural norms, group dynamics – all provide fertile ground for popular press stories at this time of year. This media draw toward psychology may make some scientists wary as they wonder, for example, if the press will misrepresent their work just to get out a cute Valentine’s story. While the chance always exists that a reporter will distort or water down your research to “sell” a sexier or cuter story, to deny yourself the opportunity to reach a broader audience would be a huge disservice.

I can list at least a dozen good reasons to talk with the press and the public about scientific work: among them, publicizing your research to potential funding agencies and future collaborators; attracting people to your specific field of study; and raising the profile of the science. A sometimes overlooked reason for talking with the press is simply to share the excitement and joy of your research with others who have similar interests.

You study social psychology to explore questions about human behavior that have piqued your curiosity throughout your life. Undoubtedly, most of us have asked, or will ask, ourselves some of the same questions at some point in time. It is rare to be in a profession that shares so much in common with so many people, thus putting you in a privileged position to constantly teach and share.

Just this past Sunday, Eli Finkel, a social psychologist at Northwestern University, wrote an Op-Ed in the New York Times about his relationship research – yes, taking advantage of Valentine’s Day. His work found that married couples who spent just 7 minutes at a time, 3 times a year, writing about their fights from a neutral point of view were happier in their marriages. The title of the Op-Ed, “Dear Valentine, I Hate It When You …”, is cute, yes, but the message is far from trivial: for all couples, this research hits home, offering insight into how we can more constructively tackle relationship problems.

Reaching out to the media to share your research is an enriching experience that I hope you all will undertake throughout your careers.

Lisa M.P. Munoz, Public Information Officer, SPSP

spsp.publicaffairs@gmail.com | @SPSPnews