Tuesday, 8 April 2014

A mutual failure of discovery: DIB and DiCoT

Today, I have been doing literature searching for a paper on Distributed Cognition (DCog). By following a chain of references, I happened upon a paper on Determining Information Flow Breakdown (DIB). DIB is a method for applying the theory of DCog in a semi-structured way in complex settings. The example the authors use in the paper comes from healthcare.

The authors state that "distributed cognition is a theoretical approach without an accepted analytical method; there is no single 'correct way' of using it. [...] the DIB method is a practical application of the theory." At the time that work was published (2007), there were at least two other published methods for applying DCog: the Resources Model (2000) and DiCoT (Distributed Cognition for Teamwork; 2006). The developers of DIB were clearly unaware of this previous work. Conversely, it has taken me seven years from when the DIB paper was published to become aware of it, even though my team have been working on DCog in healthcare for most of that time. How could that happen?

I can think of several answers involving parallel universes, different literatures, too many different journals to keep track of, the fragility of search terms, needles in haystacks. You take your pick.

Whatever the answer actually is (and it's probably something to do with a needle in another universe), it's close to being anti-serendipity: a connection that is obvious, that should have been made, and yet was missed. We clearly have some way to go in developing information discovery tools that work well.

Saturday, 5 April 2014

Never mind the research, feel the governance

In the past 5 days, I have received and responded to:
  • 16 emails from people in the university, the REC and the hospital about one NHS ethics application that required a two-word change to one information sheet after it had already been approved by both the university and the REC. The hospital spotted a minor problem, so the application now has to go around the whole cycle again, which is likely to take several weeks at least.
  • 6 emails about who exactly should sign one of the forms in a second ethics application (someone in the university or the hospital).
  • 12 emails about the set of documents needed for a third application (I lost count of what's needed beyond 20 items).
I dread to think what the invisible costs of all these communications and actions are, when scaled up to all the people involved in the process (and my part is a small one because I delegate most of the work to others), and to all the ethics applications that are going on in parallel.

I thought I was getting to grips with the ethics system for the NHS; I had even thought that it was getting simpler, clearer and more rational over time. But recent experiences show otherwise. This is partly because we're working with a wider range of hospitals than previously, and every one seems to have its own local procedures and requirements. Some people are wonderful and really helpful; others seem to consider it to be their job to find every possible weakness and block progress. I have wondered at times whether this is because we are not NHS employees (or indeed even trained clinicians). But it seems not: clinical colleagues report similar problems; in fact, they've put a cost on the delays that they have experienced through the ethical clearance process. Those costs run into hundreds of thousands of pounds. We don't do research to waste money like this, but to improve the quality and safety of patient care.

Today, there's an article in the Guardian about the under-resourcing of the health service and the impact this is having on patient care. Maybe I'm naive, but if the inefficiencies that we find in the process of gaining permission to conduct a research study in the NHS are replicated in all other aspects of health service delivery, it's no wonder the service feels under-resourced.

Tuesday, 1 April 2014

Looking for the keys under the lamp post? Are we addressing the right problems?

Recently, I received an impassioned email from a colleague: "you want to improve the usability of the bloody bloody infusion pump I am connected to? give it castors and a centre of gravity so I can take it to the toilet and to get a cup of coffee with ease". Along with photos to illustrate the point.

He's completely right: these are (or should be) important design considerations. People still want to live their lives and have independence as far as possible, and that's surely in the interests of staff as well as patients and their visitors.

In this particular case, better design solutions have been proposed and developed. But I've never seen one of them in use. Instead, I've seen plenty of improvised solutions, such as a bed-bound patient being wheeled from one ward to another with a nurse walking alongside holding up the bag of fluid while the pump is balanced on the bed with the patient.

Why don't hospitals invest in better solutions? I don't know. Presumably because the problem is invisible to the people who make purchasing decisions, because staff and patients are accustomed to making do with the available equipment, and because better equipment costs more but has minimal direct effect on patient outcomes.

An implication of the original message is that in CHI+MED we're addressing the wrong problem: that in doing research on interaction design we're missing the in-your-face problem that the IV pole is so poorly designed. That we're like the drunk looking for the keys under the lamp post because that's where the light is, when in fact the keys got dropped somewhere else. Others who claim that the main problem in patient safety is infection control are making the same point: we're focusing our attention in the wrong place.

I wish there were only one problem to solve – one key to be found, under the lamp post or elsewhere. But that's not the case. In fact, in healthcare there are so many lost keys that they can be sought and found all over the place. Excuse me while I go and look for some more...

Thursday, 27 March 2014

Mind the gap: the gulfs between idealised and real practice

I've given several talks and written short articles about the gap between idealised and real practice in the use of medical devices. But to date I've blurred the distinctions between concerns from a development perspective and those from a procurement and use perspective.

Developers have to make assumptions about how their devices will be used, and to design and test (and build safety cases, etc.) on that basis. Their obligation (and challenge) is to make those assumptions as accurate as possible for their target market segment, and to make them as explicit as possible, particularly for subsequent purchasing and use. This is easier said than done: I write as someone who on Tuesday signed an agreement for a pile of work on our car, most of which was required but part of which was not; how the unnecessary work got onto the job sheet I do not know, but because I'd signed for it, I had to pay for it. Ouch! If I can accidentally sign for a little bit of unnecessary work on the car, how much easier is it for a purchasing officer to sign off unnecessary features, or slightly inappropriate features, on a medical device? [Rhetorical question.]

Developers have to work for generic market segments, whether those are defined by the technological infrastructure within which the device sits, the contexts and purposes for which the device will be used, the level of training of its users, or all of the above. One device probably can't address all needs, however desirable 'consistency' might be.

In contrast, a device in use has to fit a particular infrastructure, context, purpose, user capability... So there are many knowns where previously there were unknowns. And maybe the device fits well, and maybe it doesn't. And if it doesn't, then something needs to change. Maybe it was the wrong device (and needs to be replaced or modified); maybe it's the infrastructure or context that needs to be changed; maybe the users need to be trained differently, or better.

When there are gaps (i.e., when technology doesn't fit properly), people find workarounds. We are so ingenious! Some of the workarounds are mostly positive (such as appropriating a tool to do something it wasn't designed for, but for which it serves perfectly well); some introduce real vulnerabilities into the system (by violating safety features to achieve a short-term goal). When gaps aren't even recognised, we can't think about them or about how to design to bridge them. We need to be alert to the gaps between design and use.

Sunday, 16 March 2014

Collaborative sensemaking under uncertainty: clinicians and patients

I've been discussing a couple of 'conceptual change' projects with clinicians, both of them in topic areas (pain management and contraception) where the clinical details aren't necessarily well understood, even by most clinicians. I have been struck by a few points that seem to me to be important when considering the design of new technologies to support people in managing their health:
  1. Different people have different basic conceptual structures onto which they 'hang' their understanding. The most obvious differences are between health professionals (who have received formal training in the subject) and lay people (who have not), but there are also many individual differences. In the education literature, particularly building on the work of Vygotsky, we find ideas of the 'Zone of Proximal Development' and of 'scaffolding'. The key point is that people build on their existing understanding, and ideas that are too far from that understanding, or are expressed in unfamiliar terms, cannot be assimilated. In the sensemaking literature, Klein discusses this in terms of 'frames', while Pirolli, Card and Russell discuss the process of making sense of new information in terms of how people look for and integrate new information with existing understanding guided by the knowledge gaps of which they are aware. In all of these literatures, and others, it's clear that any individual starts from their current understanding and builds on it, and that significant conceptual change (throwing out existing ideas and effectively starting again from scratch) is difficult. This makes it particularly challenging to design new technologies that support sensemaking because it's necessary to understand where someone is starting from in order to design systems that support changing understanding.
  2. One of the important roles of clinicians is to help people to make sense of their own health. In the usual consultation, this is a negotiative process in which common ground is achieved – e.g., by the clinician having a repertoire of ways of assessing the patient's current understanding and building on it. The clinician's skills in this context are not well understood, as far as I'm aware.
  3. For many patients, the most important understanding is 'what to do about it': it's not to get the depth of understanding that the clinician has, but to know how to manage their condition and to make appropriately informed decisions. Designing systems to support people in obtaining different depths and types of understanding is an exciting challenge.
  4. Health conditions can be understood at many different levels of abstraction (from basic chemistry and biology through to high-level causal relations), and we seem to employ metaphors and analogies to understand complex processes. These have great value, but inevitably break down when pushed too far. There's probably great potential in exploring the use of different metaphors and explanations to support people in managing their health.
  5. As people are being expected to take more responsibility for their own health, there's a greater onus on clinicians to support patients' understanding. Clinicians may have particular understanding that they want to get across to patients, but it needs to be communicated in different ways for different people. And we need to find ways of managing the uncertainty that still surrounds much understanding of health (e.g. risks and side-effects).
All these points make it essential to consider Human Factors in the design of technologies to support conceptual change, behavioural change and decision making in healthcare, so that we can close the gap between clinicians' and patients' understanding in ways that work well for both.

Friday, 8 November 2013

That was easy: Understanding Usability and Use

For a long time (measured in years rather than days or weeks), I've been struggling with the fact that the word "usability" doesn't seem to capture the ideas that I consider to be important. Which are about how well a device actually supports a person in doing the things they want to do.

Some time ago, a colleague (apparently despairing of me) gave me a gift: a big red button that, when you press it, announces that "That was easy". Yep: easy, but also (expletive deleted) pointless.

So if someone is given an objective ("Hey, press this button!") then ease of use is important, and this button satisfies that need. Maybe the objective is expressed less directly ("Press a red button", which would require finding the red button to press, or "Do something simple", which could be interpreted in many different ways), and the role of the "easy" button isn't so obvious. Ease of use isn't the end of the story: while it's important that it is easy to do what you want to do, it's also important that what the device makes easy is something you actually want to do. In this case, there probably aren't many people who get an urge to press an "easy" button. So it's easy, but it's not useful, or rewarding (the novelty of the "easy" button wore off pretty fast).

So it doesn't just matter that a system is usable: it also matters that the system does the things that the user wants it to do. Or an appropriate subset of those things. And in a way that makes sense to the user. It matters that the system has a use, and fits the way the user wants to use it.

That use may be pure pleasure (excite, titillate, entertain), but many pleasures (such as that of pressing an "easy" button) wear off quickly. So systems need to be designed to provide longer term benefit... like really supporting people well in doing the things that matter to them – whether in work or leisure.

Designing for use means understanding use. It means understanding the ways that people think about use. In quite a lot of detail. So that use is as intuitive as possible. That doesn't mean designing for oneself, but learning about the intended users and designing for them. And no designing things that are "easy" but inappropriate!

Thursday, 31 October 2013

Different ways of interacting with an information resource

I'm at a workshop on how to evaluate information retrieval systems, and we are discussing the scope of concern. What is an IR system, and is the concept still useful in the 21st Century, where people engage with information resources in many different ways? The model of information seeking in sessions for a clear purpose still holds for some interactions, but it's certainly not the dominant practice any more.

I was struck when I first used the NHS Choices site that it encourages exploration above seeking: it invites visitors to consume health information that they hadn't realised they might be interested in. This is possible with health in a way that it might not be in some other areas, because most people have some inherent interest in better understanding their own health and wellbeing. At least some of the time! Such sites encourage unplanned consumption, hopefully leading to new understanding, without having a particular curriculum to impart.

On the way here, I read a paper by Natalya Godbold in which she describes the experiences of dialysis patients. One of the points she makes is that people on dialysis exploit a wide range of information resources in managing their condition – importantly, including how they feel at the time. This takes embodied interaction into a new space (or rather, into a space in which it has been occurring for a long time without being noticed as such): the interaction with the technology affects and is informed by the experienced effects that flow (literally as well as metaphorically) through the body. And information need, acquisition, interpretation and use are seamlessly integrated as the individual monitors, makes sense of and manages their own condition. The body, as well as the world around us, is part of the ecology of information resources we work with, often without noticing.

While many such resources can't be "designed", it's surely important to recognise their presence and value when designing explicit information resources and IR systems.