Friday 31 August 2012

Inarticulate? The challenges of health information seeking

Showing impeccable timing, three people I care about have fallen ill at the same time. To make sense of what is happening to each of them, I have been doing a lot of internet searching. And it has become really clear that – as a lay person – some health information needs are much easier to satisfy than others. Paradoxically, it's the more technical ones that are easier to work with. Or more precisely: the ones for which a key technical term is provided (e.g. by a clinician).

In one case, we were told that Bert (not his real name) needed an angioplasty. I had no idea what one of those was, but a quick search on the query term "angioplasty" gave several search results that were consistent with each other, comprehensible and credible. Following up on that and related terms has meant that I now (rightly or wrongly) feel that I understand fairly well what Bert has gone through and what implications it has for the future.

In a second case, Alf (also not his real name) told me that the excruciating pain he had been experiencing had been diagnosed as gallstones, and in particular a stone that had lodged in the bile duct. The treatment was a procedure (not an operation) that involved putting a tube through his nose and down into his gall bladder and removing the stone. Any search that I tried with terms such as "gallstones", "removal", "nose" led to sites about "cholecystectomy" (i.e. either laparoscopic [keyhole] or open surgery). We both knew that Alf had not had an operation. It took hours of searching with different terms to find any information that even approximately matched what Alf and I knew. Eventually, I tried terms involving "camera" and "gallstones", which led to "endoscopy". As I type, I believe that Alf had an "endoscopic retrograde cholangiopancreatography". I can't even pronounce those terms, never mind spell them. But if you know the terms then there are pretty good descriptions of what they involve that really help the lay person to make sense of the treatment.

In the third case, Clarissa (not her real name) was incredibly tired. Her doctor had dismissed it as "a virus". I've seen a virus being defined as "a condition that the doctor can't diagnose in detail but isn't worried about". But this "virus" had been around for weeks. What is happening? Well, most internet searches that involve the word "fatigue" and any other symptom seem to lead to results about "cancer". That's not what you want to find. And it's not what I believe. I'm still trying to make sense of what might be affecting Clarissa. I don't have a good search term, and I can't find one.

Health is an area that affects us all. We all want to make sense of conditions that affect us and our loved ones. But there is a huge terminological gulf between lay language for describing health experiences and the technical language of professionals. If you know the technical "keys" then it's easy to find lay explanations, but the opposite is not yet true: if you only have a lay way of talking about health experiences then there's no easy way to tap into a sophisticated health information understanding. This isn't an easy challenge; I wonder whether anyone can rise to it.

Thursday 16 August 2012

"He's got dimples!": making sense of visualisations

Laura's baby is due in 2 months, so time to get a 3D scan... and the first thing Laura told me after the scan was "he's got dimples!" I'm sure that if there had been any problem detected, that would have been mentioned first, but no: the most important information is that he has dimples, just like her. But for the radiographer doing the scan, it's likely that dimples came way down the list of features to look out for (after formation of the spine, whether the cord is around his neck, how large his head is...). Conversely, when her uncle looked at pictures from the scan, his main comment was about the way it looked as if there was a light shining on the baby. And I wanted to know what the strange shape between chin and elbow was (I still don't know...).

3D image of baby in womb


People look at scenes and scans in different ways, and notice different features of them. They "make sense" of the visual information in different ways. Some are concerned with syntactic features such as aspects of the image quality. Some are more concerned with the semantics: what it means (in this case, for the health of the child, or what he will look like). Yet others may be more concerned with the pragmatics: how information from the scene can inform action – this might have been the case if the scan were being used by a surgeon to guide them during a live operation.

Scanning technology has come on in leaps and bounds over recent decades: with the ultrasound scan I had before Laura was born, it was difficult even to recognise the still image as a baby; a naive viewer could only make sense of the whole by seeing how the parts moved together. Advances in technology have meant that what used to be difficult interpretation tasks for the human have been made much easier. And they have made more information potentially available (I didn't even know whether Laura was a boy or a girl until she was born, never mind whether or not she had dimples).


New technologies create many new possibilities – for monitoring, diagnosis, treatment, and even for joy. In this case, they've made the user's interpretation task much easier and made more information available. The scan is for well defined purposes, and the value of the visualisation is that it takes a large volume of data and presents it in a form that really makes sense. There is lots of information about the baby that the 3D scan does not provide, but for its intended purpose it is delightful.

Sunday 12 August 2012

The right tool for the job? Qualitative methods in HCI

It's sad to admit it, but my holiday reading has included Carla Willig's (2008) text on qualitative research methods in psychology and Jonathan Smith's (2007) edited collection on the same topic. I particularly enjoyed the chapters by Smith on Interpretative Phenomenological Analysis and by Kathy Charmaz on Grounded Theory in the edited collection. One striking feature of both books is that they have a narrative structure of "here's a method; here are its foundations; this is what it's good for; this is how to apply it". In other words, both seem to take the view that one becomes an expert in using a particular method, then builds a career by defining problems that are amenable to that method.

One of the features of Human–Computer Interaction (HCI) as a discipline is that it is not (with a few notable exceptions) fixated on which methods to apply. It is much more concerned with choosing the right tools for the job at hand, namely some aspect of the design or evaluation of interactive systems that enhance the user experience, productivity, safety or similar. So does it matter whether the method applied is "clean" Grounded Theory (in any of its variants) or "clean" IPA? I would argue not. The problem, though, is that we need better ways of planning qualitative studies in HCI, and then of describing how data was really gathered and what analysis was performed, so that we can better assess the quality, validity and scope of the reported findings.

There's a trade-off to be made between doing studies that can be done well because the method is clear and well-understood and doing studies that are important (e.g. making systems safer) but for which the method is unavoidably messy and improvisational. An important challenge for HCI (which has always adopted and adapted methods from other disciplines that have stronger methodological foundations) is to develop a better set of methods that address the important research challenges of interaction design. These aren't limited to qualitative research methods, but that is certainly one area where it's important to have a better repertoire of techniques that can be applied intelligently and accountably to address exciting problems.