Saturday 18 May 2013

When is a medical error a crime?

I've recently had "Collateral Damage" recommended to me. I'm afraid I can't face reading it: just the summary is enough. Having visited Johns Hopkins, and in particular the Armstrong Institute for Patient Safety, a couple of months ago, I'm pretty confident that the terrible experience of the Walter family isn't universal, even within that one hospital, never mind nationally or internationally. And therein lies a big challenge: there is such a wide spectrum of experiences and practices in healthcare that it's very difficult to generalise.

There are clearly challenges:
  • the demands of doing science and of providing the best quality patient care may pull in opposing directions: if we never try new things, relying on what is already known as best practice, we may not make discoveries that actually transform care.
  • if clinicians are not involved in the design of future medical technologies then how can those technologies be well designed to support clinical practice? But if clinicians are involved in their design, and have a stake in their commercial success, how can they remain objective in their assessments of clinical effectiveness?
There are no easy answers to such challenges, but clearly they are cultural and societal challenges as well as being challenges for the individual clinician. They are about what a society values and what behaviours are acceptable and/or rewarded, whether through professional recognition or financially.

I know that I have a tendency to view things positively, to argue for a learning culture rather than a blame culture. Accounts like "Collateral Damage" might force one to question that position as being naive in the extreme. For me, though, the question is: what can society and the medical establishment learn from such an account? That's not an easy question to answer. Progress in changing healthcare culture is almost imperceptibly slow: reports such as "To Err is Human" and "An Organisation with a Memory", both published over a decade ago (and the UK report now officially 'archived'), haven't had much perceptible effect. Consider, for example, the recent inquiry into failings in Mid Staffordshire.

Bob Wachter poses the question "when is a medical error a crime?". He focuses on the idea of a 'just culture': that there is a spectrum of behaviours, from the kinds of errors that anyone could make (and for which learning is a much more constructive response than blaming), through 'at risk' behaviours to 'reckless' behaviours where major risks are knowingly ignored.

The Just Culture Community notes that "an organisation's mission defines its reason for being". From a patient's perspective, a hospital's "reason for being" is to provide the best possible healthcare when needed. Problems arise when the hospital's mission is "to generate a profit", to "advance science", or any other mission that might be at odds with providing the best possible care in the short term. The same applies to individual clinicians and clinical teams within the hospital.

I find the idea of a "just culture" compelling. It is not a simple agenda: it involves balancing learning against blame, and it demands a sophisticated notion of accountability. It clearly places the onus for ensuring safety at an organisational / cultural level, within which the individual works, interacts and is accountable. But it does presuppose that the different people or groups involved broadly agree on the mission or values of healthcare. "Collateral Damage" forces one to question whether that assumption is correct. It is surely a call for reflection and learning: what should the mission of any healthcare provider be? How is that mission agreed on by both providers and consumers? How are values propagated across stakeholders? Etc. Assuming that patient safety is indeed valued, we all need to learn from cases such as this.

Coping with complexity in home hemodialysis

We've just had a paper published on how people who need to do hemodialysis at home manage the activity. Well done to Atish, the lead author.

People doing home hemodialysis are a small proportion of those who need hemodialysis overall: the majority have to travel to a specialist unit for their care. Those doing home care have to take responsibility for a complex care regime. In this paper, we focus on how people use time as a resource to help manage that care. Strategies include planning to perform actions at particular times (so that time acts as a cue to perform an action); allowing extra time to deal with any problems that might arise; building time for reflection into a plan (to minimise the risk of forgetting steps); and organising tasks to minimise the number of things that need to be thought about or done at any one time (minimising peak complexity). There is a tendency to think about complex activities in terms of task sequences, ignoring the time frame in which people carry out tasks and the ways in which time (and our experience of time) can be used as a resource as well as, conversely, placing demands on us (e.g. through deadlines).
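To make that last strategy a little more concrete, here is a minimal sketch of what "minimising peak complexity" might look like if you treated it as a scheduling problem. This is purely my own illustration, not anything from the paper: the task names, "load" scores and time slots are invented, and the greedy rule is just one simple way of spreading demands across time.

```python
# Illustrative sketch only (not from the paper): spread tasks across time
# slots so that no single slot demands too much attention at once.
from collections import defaultdict

def schedule(tasks, slots):
    """Greedily assign each task to the currently least-loaded slot.

    tasks: list of (name, load) pairs, where load is a rough measure of
           how much attention the task demands (invented numbers here).
    slots: list of slot labels (times of day that can act as cues).
    Greedy assignment is only a heuristic: it keeps the peak load low,
    but does not guarantee the true minimum.
    """
    load = {s: 0 for s in slots}
    plan = defaultdict(list)
    for name, demand in sorted(tasks, key=lambda t: -t[1]):
        s = min(slots, key=lambda s: load[s])   # least-loaded slot so far
        plan[s].append(name)
        load[s] += demand
    return dict(plan), load

if __name__ == "__main__":
    tasks = [("check machine", 3), ("prepare lines", 5),
             ("record readings", 2), ("weigh self", 1), ("set up fluids", 4)]
    slots = ["morning", "midday", "evening"]
    plan, load = schedule(tasks, slots)
    for slot in slots:
        print(slot, plan.get(slot, []), "load:", load[slot])
```

The point of the study, of course, is that people work this out for themselves, using the structure of the day rather than an algorithm; the sketch just shows the kind of trade-off they are implicitly managing.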

This study focused on a particular (complex and safety-critical) activity that has to be performed repeatedly (every day or two) by people who may not be clinicians but who become experts in the task. We all do frequent tasks that involve time management, whether that's preparing a meal or getting ready to go to work. There's great value in regarding time as a resource to be used effectively, as well as something that places demands on us (not enough time...).

Sunday 12 May 2013

Engineering for HCI: Upfront effort, downstream pay-back

The end of Engineering Practice 1 (c.1980).

Once upon a time, I was a graduate trainee at an engineering company. The training was organised as three-month blocks in different areas of the company. My first three-month block was on the (work)shop floor. Spending hours working milling machines and lathes was a bit of a shock after studying mathematics at Cambridge. You mean it is possible to use your body as well as your mind to solve problems?!?

I learned that engineering was about the art of the possible (e.g. at that time you couldn't drill holes that went around corners, though 3D printing has now changed our view of what is possible). And also about managing precision: manufacturing parts that were precise enough for purpose. Engineering was inherently physical: about solving problems by designing and delivering physical artefacts that were robust and reliable and fit for purpose. The antithesis of the "trust me, I'm an engineer" view (however much that makes me smile).

Enter "software engineering": arguably, this term was coined to give legitimacy to a certain kind of computer programming. Programming was (and often still is) something of a cottage industry: people building one-off systems that seem to work, but no-one is quite sure of how, or when they might break down. Engineering is intended to reduce the variability and improve the reliability of software systems. And deliver systems that are fit for purpose.

So what does it mean to "engineer" an interactive computer system? At the most recent IFIP Working Group 2.7/13.4 meeting, we developed a video: 'Engineering for HCI: Upfront effort, downstream pay-back'. And it was accepted for inclusion in the CHI2013 Video Showcase. Success! Preparing this short video turned out to be even more difficult than I had anticipated. There really didn't seem to be much consensus on what it means to "engineer" an interactive computer system. There is general agreement that it involves some rigour and systematicity, some use of theory and science to deliver reproducible results, but does the resulting system have to be usable, to be fit for purpose? And how would one measure that? Not really clear.

I started by saying that I once worked for an engineering company. That term is probably fairly unambiguous. But I've never heard of an "interactive systems engineering company" or an "HCI engineering company". I wonder what one of those would look like or deliver.