Recently, I was asked the following fantastically thought-provoking question:
In talking to my psychology seminar group about their qualitative lab, I ended up looking at Helene Joffe’s book chapter on thematic analysis. She suggests including diagrammatic representations of the themes, together with quantitative data about how many participants mentioned each theme and its subparts. This appealed to the psychology students because it gives them quantitative data and helped them see how prevalent each theme was within the sample.
And then today I saw another paper, “Supporting thinking on sample sizes for thematic analyses: a quantitative tool”. It argues that one should consider the power of the study when deciding on sample size – another concept I’d only seen in quantitative research.
Both of these sources seem to be conducting qualitative analysis with at least a nod towards some of the benefits of quantitative data, which appears to give qualitative analysis more rigor. Of course, simply adding numbers doesn’t necessarily make something more rigorous, but it does add more information to the results of an analysis, and this could influence the reader’s perception of the quality of the research. However, I don’t recall seeing this in any HCI papers. Why isn’t it used more often?
The answer (or at least, my answer) hinges on nuances of research tradition that are not often discussed explicitly, at least in HCI:
Joffe, Fugard and Potts are all thinking and working in a positivist tradition that assumes an independent reality ‘out there’, and that doesn’t take into account the role of the individual researcher in making sense of the data. Numbers are great when they are meaningful, but they can hide a lot of important complexity. For example, in our study of people’s experience of home haemodialysis, we could report how many of the participants had a carer and how many had a helper. That’s a couple of numbers. But the really interesting understanding comes in how those people (whether trained as a carer or just acting as a helper) work with the patient to manage home haemodialysis, and how that impacts on their sense of being in control, how they stay safe, their experience of being on dialysis, and the implications for the design of both the technology and the broader system of care. Similarly, we could report how many of our participants reported feeling scared in the first weeks of dialysis, but that wouldn’t get at why they felt scared or how they got through that stage. We could now run a different kind of study to tease out the factors that contribute to people being scared (having established the phenomenon) and put numbers on them, but to get the larger sample (60–80 participants) needed for this kind of analysis would involve scouring the entire country for willing HHD participants and getting permission to conduct the study from every NHS Trust separately; I’d say that’s a very high cost for a low return.
Numbers don’t give you explanatory power, and they don’t give you insights into the design of future technology. You need an exploratory study to identify issues; a quantitative analysis can then give the scale of the problem, but it doesn’t give you insight into how to solve it. In HCI studies, most people are more interested in understanding the problem for design than in doing the basic science that’s closer to hypothesis testing. Neither is right or wrong, but they have different motivations and philosophical bases.
Wolcott (p.36) quotes a biologist, Paul Weiss, as claiming, “Nobody who followed the scientific method ever discovered anything interesting.” The quantitative approach to thematic analysis doesn’t allow me to answer many of the questions I find interesting, so I’m not going to shift in that direction just to do studies that others consider more rigorous. Understanding the prevalence of phenomena is important, but so is understanding the phenomena, and the techniques you need for understanding aren’t always compatible with those you need for measuring prevalence. Unfortunately!