Sunday, 22 April 2012

Making sense of health information

A couple of people have asked me why I'm interested in patients' sensemaking, and what the problem is with all the health information that's available on the web. Surely there's something for everyone there? Well, maybe there is (though it doesn't seem that way), but both our studies of patients' information seeking and personal experience suggest that it's far from straightforward.

Part of the challenge is in getting the language right: finding the right words to describe a set of symptoms can be difficult, and if you get the wrong words then you'll get inappropriate information. And as others have noted, the information available on the internet tends to be biased towards more serious conditions, leading to a rash of cyberchondria.

But actually, diagnosis is only a tiny part of the engagement with and use of health information. People have all sorts of questions, such as "should I be worried?" or "how can I change my lifestyle?", and many more individual and personal issues, often not focusing on a single question but on trying to understand an experience, or a situation, or how to manage a condition. For example, there may be general information on migraines available, but any individual needs to relate that generic information to their own experiences, and probably experiment with trigger factors and ways of managing their own migraine attacks, gradually building up a personal understanding over time, using both external resources and individual experiences.

The literature describes sensemaking in different ways that share many common features. Key elements are that people:
  • look for information to address recognised gaps in understanding (and there can be challenges in looking for information and in recognising relevant information when it is found).
  • store information (whether in their heads or externally) for both immediate and future reference.
  • integrate new information with their pre-existing understanding (so sensemaking never starts from a blank slate, and if pre-existing understanding is flawed then it may require a radical shift to correct that flawed understanding).
One important element that is often missing from the literature is the importance of interpretation: people need to explicitly interpret information to relate it to their own concerns. This is particularly true for subjects where there are professional and lay perspectives, languages and concerns for the same basic topic. Not only do professionals and lay people (clinicians and patients in this case) have different terminology; they also have different concerns, different engagement, and different ways of thinking about the topic.

Sensemaking is about changing understanding, so it is highly individual. One challenge in designing any kind of resource that helps people make sense of health information is recognising the variety of audiences for information (prior knowledge, kinds of concerns, etc.) and making it easy for people to find information that is relevant to them, as an individual, right here and now. People will always need to invest effort in learning: I don't think there's any way around that (indeed, I hope there isn't!)... but patients' sensemaking seems particularly interesting because we're all patients sometimes, and because making sense of our health is important, but could surely be easier than it seems to be right now.

Sunday, 15 April 2012

The pushmepullyou of conceptual design

I've just been reading Jeff Johnson and Austin Henderson's new book on 'conceptual models'. They say (p.18) that "A conceptual model describes how designers want users to think about the application." At first this worried me: surely the designers should be starting by understanding how users think about their activity and how the application can best support users?

Reading on, it's obvious that putting the user at the centre is important, and they include some compelling examples of this. But the question of how to develop a good conceptual model that is grounded in users' expectations and experiences is not the focus of the text: the focus is on how to go from that to an implementation. This is a very complementary approach to ours on CASSM, where we've been concerned with how to elicit and describe users' conceptual models, and then how to support them through design.

It seems to be impossible to simultaneously put both the user(s) and the technology at the centre of the discourse. In focusing on the users, CASSM is guilty of downplaying the challenges of implementation. Conversely, in focusing on implementation, Johnson and Henderson de-emphasise the challenges of eliciting users' conceptual models. These can seem, like the pushmepullyou from Dr Dolittle, to be pulling in opposite directions. But this text is a welcome reminder that conceptual models still matter in design.

Thursday, 5 April 2012

KISS: Keep It Simple, Sam!

Tony Hoare is credited with claiming that... "There are two ways of constructing a software design; one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." Of course, he is focusing on software: on whether it is easy to read or test, or whether it is impossible to read (what used to be called "spaghetti code" but probably has some other name now), and impossible to devise a comprehensive set of tests for.

When systems suffer "feature creep", where they acquire more and more features to address real or imagined user needs, it's nigh on impossible to keep the code simple, so inevitably it becomes harder to test, and harder to be confident that the testing has been comprehensive. This is a universal truth, and it's certainly the case in the design of software for infusion devices. The addition of drug libraries and dose error reduction software, and the implementation of multi-function systems to be used across a range of settings for a variety of purposes, makes it increasingly difficult to be sure that the software will perform as intended under all circumstances. There is then a trade-off between delivering a timely system and delivering a well-designed, well-tested system... or delivering a system that then needs repeated software upgrades as problems are unearthed. And you can never be sure you've really found all the possible problems.
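To make the testability point concrete, here is a deliberately minimal sketch of the kind of dose-limit check a drug library might provide. All names and limit values are hypothetical, invented for illustration and not taken from any real infusion device; the point is that a function this small can be tested exhaustively across its branches, whereas every added feature (per-ward libraries, per-patient overrides, version-dependent behaviour) multiplies the cases to cover.

```python
# Hypothetical sketch of a dose error reduction check.
# Drug names and limits are illustrative only.

# A tiny "drug library": drug name -> (soft_limit, hard_limit) in mL/h.
DRUG_LIBRARY = {
    "morphine": (5.0, 10.0),
    "heparin": (25.0, 50.0),
}

def check_rate(drug, rate_ml_per_h):
    """Classify a programmed rate as 'ok', 'warn' (over the soft
    limit, nurse may confirm) or 'block' (over the hard limit)."""
    if drug not in DRUG_LIBRARY:
        return "block"  # unknown drug: refuse rather than guess
    soft, hard = DRUG_LIBRARY[drug]
    if rate_ml_per_h > hard:
        return "block"
    if rate_ml_per_h > soft:
        return "warn"
    return "ok"
```

With only four outcomes per drug, the whole behaviour can be enumerated in a handful of test cases, e.g. `check_rate("morphine", 3.0)` gives `"ok"`, `7.0` gives `"warn"`, `12.0` gives `"block"`. Once the check also depends on which software version and which site-specific library a particular pump is running, that exhaustive confidence is lost.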

These aren't just problems for the software: they're also problems for the users. When software upgrades change the way the system performs, it's difficult for the users to predict how it will behave. Nurses don't have the mental resources to be constantly thinking about whether they're working with the infusion device that's running version 3.7 of the software or the one that's been upgraded to version 3.8, or to anticipate the effects of the different software versions, or different drug libraries, on system performance. Systems that are already complicated enough are made even more so by such variability.

Having fought with several complicated technologies recently, my experience is not that they have no obvious deficiencies, but that those deficiencies are really, really hard to articulate clearly. And if you can't even describe a problem, it's going to be very hard to fix it. Better to avoid problems in the first place: KISS!