Friday, 7 April 2017

If the user can’t use it, it doesn’t work: focusing on buying and selling


"If the user can’t use it, it doesn’t work": This phrase, from Susan Dray, was originally addressed at system developers. It presupposes good understanding of who the intended users are and what their capabilities are. But the same applies in sales and procurement.

In hospital (and similar) contexts, this means that procurement processes need to take account of who the intended users of any new technology are. For example, who are the intended users of new wireless, integrated glucometers, or of new infusion pumps that need to have drug libraries installed and maintained... and that also have to be used during routine clinical care? What training will those users need? How will the new devices fit into (or disrupt) their workflow? And so on. If any of the intended users can’t use it, then the technology doesn’t work.

I have just encountered an analogous situation with some friends. These friends are managing multiple clinical conditions (including Alzheimer’s, depression, the after-effects of a mini-stroke, and type II diabetes) but are nevertheless living life to the full and coping admirably. But recently they were sold a sophisticated “Agility 3” alarm system, comprising a box on the wall with multiple buttons and alerts, a wearable “personal attack alarm”, and two handheld controllers (as well as PIR sensors, a smoke alarm and more). They were persuaded that this would address all their personal safety and home security needs. I don’t know whether the salesperson referred directly or obliquely to any potential physical vulnerability. But their main vulnerability is actually that they no longer have the mental capacity to assess the claims of the salesperson, let alone the capacity to use any technology more sophisticated than an on/off switch. If the user can’t use it, it doesn’t work. By this definition, this alarm system doesn’t work. Caveat emptor, perhaps, but selling a product that is meant to protect people when the net effect is to further expose their vulnerability is crass mis-selling. How ironic!

Wednesday, 15 March 2017

Safer Healthcare



I've just finished reading Safer Healthcare. For me, the main take-home message is the different kinds of safety that pertain to different situations. Vincent and Amalberti describe three different approaches to safety:
  • ultra-safe, avoiding risk, amenable to standardised practices and checklists. This applies to the areas of healthcare where it is possible to define (and follow) standardised procedures.
  • high-reliability, managing risks, which I understand as corresponding to "resilient" or "safety II" – empowering people within the system to learn and adapt. This seems to apply to a lot of healthcare, where the variabilities can't be eliminated, but can be managed.
  • ultra-adaptive, embracing risk. This relies on the skills and resilience of individuals. This applies to innovative techniques (the very first heart transplant, for example) where it really isn't possible to plan fully ahead of time because so much is unknown and it relies on the skills of the individual.
The authors draw on the example of rock climbing. The safest forms of climbing (with a top-rope, which really does minimise the chances of hitting the ground from a fall) are in the first category; most climbing falls into the second: we manage risk by carefully following best practice while accepting that there are inherent risks; people more adventurous than me (and more skilled) push the boundaries of what is possible – both for themselves and for the community. But it is also possible to compromise safety, as graphically described by James McHaffie addressing Eve Lancashire, whose attitude to safety worries him (see about halfway through the post).

Vincent and Amalberti's categorisation highlights why comparing healthcare with aviation in terms of safety is of limited value: commercial aviation is, in their terms, ultra-safe, with standardised procedures and a lot of barriers to risk; healthcare involves far too much variability for all of it to be amenable to such an approach.

Another point Vincent and Amalberti make is that incidents / harm very often don't happen within one episode of care, but evolve over time. I am reminded of a similar point made in a very different context by Brown and Duguid, who described the way that photocopier engineers learn about their work (and the variability across machines and situations): they describe it as being like the "passage of the sun across the sky" – i.e., it's not really clear when it starts or ends, or even exactly how it develops moment to moment. So many activities – and incidents – don't have a clear start and end. Possibly the main thing that distinguishes a reportable incident is that there is a point at which someone realises that something has gone wrong...

Sunday, 12 March 2017

Public health -- personal health



I've just re-read the Academy of Medical Sciences report "Improving the health of the public by 2040". It makes many insightful points, particularly about the need for multidisciplinary training to deliver future professionals who can work across disciplinary silos – whether within healthcare and medical disciplines or with other disciplines such as computing and other branches of engineering. It also notes the likely importance of digital tools and "big data" in the future. It does, however, focus entirely on the population, apparently ignoring the fact that the population is made up of individuals, who each control their own health – at least to the extent that they can choose whether to comply with (or adhere to) medical advice and whether or not to share data about themselves. It seems a big missed opportunity not to link the individual to the population, because the health outcomes and practices of the population emerge from the individual behaviours of each person. Sure, the behaviours of individuals are shaped by population-level factors, but they aren't determined by them. It's surely time to link the individual and the population better.


This can be compared with the Wachter Review, which focused on the value of electronic health records and other digital technologies for delivering safer and more effective care. That review also highlighted the need for professionals with skills that cross information technologies and clinical expertise, but it also considers issues such as engagement and usability. It notes that "implementing health IT is one of the most complex adaptive changes in the history of healthcare". Without addressing the complexity (which is a consequence of the number of individuals, roles, organisations and cultures involved), it's going to be difficult to achieve population-level improvements – by 2040, or at any time.

Tuesday, 22 November 2016

The total customer experience

Last week, I had a delivery from DPD. At one level, it was very mundane (I received and signed for a parcel). At another, it was very positive: I could choose my delivery time to within an hour; I could even elect for a "green" slot when they were going to be in the area anyway (which obviously reduces their cost as well as simplifying my choice). Then, on the day, I could track the movement of my parcel online and anticipate pretty accurately when it would arrive. The user interface was good, and it was the "front end" of a good system that worked well. This made the overall experience of choosing, ordering and receiving the product much more pleasurable than it might otherwise have been.

In contrast, Samuel Gibbs reports on his experience of using novel Internet of Things tools to do something comparable for frequently bought products. Quite apart from the prospect of having dozens of IoT devices stuck up around the home, he highlights the challenges of receiving the goods once ordered, and of receiving goods in impractically large quantities. These new technologies aren't just about an easy-to-use button-press (like my "easy" button), but about the total customer experience of choosing, ordering and receiving... and someone needs to think that through properly too.

Tuesday, 15 November 2016

Making time for mindfulness

You can't just design a new technology and assume people will use it. The app stores are littered with apps that are used once, or not at all. It's important to understand how people fit technologies into their lives (and how the design of the technology affects how it's used). We choose to use apps (or to be open to responding to them) in ways that depend on time and place. For example, on the train in the morning, lots of commuters seem to be accessing news via apps: it's a good opportunity to catch up with what's happening in the world, and my journey's an appropriate length of time to do that in.

We've recently published a paper on how people make time for mindfulness practices.
Participants were mostly young, urban professionals (so possibly not representative of a more general population!), and their big challenge was how to fit meditation practices into their busy lives. Mindfulness is difficult to achieve on a commute, for example, so people need to explicitly make time for it, in a place that feels right. There was a tension between making it part of a routine (and therefore something that "has to be done") and making it feel like a choice (something spontaneous). But there were lots of other factors that shape when, how and whether people used the mindfulness app, such as their sense of self-efficacy (how much they feel in control of their lives), their mood (mindfulness when you're upset or angry just isn't going to happen – not in ten minutes, anyway), and the attitudes of friends to mindfulness (peer pressure is very powerful).

Some of these are factors that can't be designed for – beyond recognising that a mindfulness app isn't going to work for all people, or in all situations. Others can, perhaps, be designed for: managing people's expectations of what differences mindfulness might make in their lives, for example, or giving guidance on when and how to fit in app use. What are some of the take-homes?
  • that incidental details (like the visual appearance or the sound of someone's voice) matter;
  • that people are on a 'journey' of learning how to practise mindfulness (don't force an expert to start at the beginning just because they haven't used this particular app before, for example);
  • that people need to learn how to fit app use and mindfulness into their lives, and expectations need to be managed; and
  • that engaging with the app isn't the same as engaging with mindfulness... but the one can be a great support for the other in the right circumstances.
 
Friday, 28 October 2016

Guidance on creating, evaluating and implementing effective digital healthcare interventions

This is an unconventional blog post – essentially, a place to index a set of papers. Last year, I participated in a workshop: ‘How to create, evaluate and implement effective digital healthcare interventions: development of guidance’. 
The workshop was led by Susan Michie, and resulted in a set of articles discussing key issues facing the development and evaluation of digital behaviour change interventions. There were about 50 participants, from a variety of countries and disciplines. And we all had to work ... on delivering interdisciplinary papers as well as on discussion. The outcome has just been published.
Credits: The workshop was hosted in London by the Medical Research Council, with funding from the Medical Research Council (MRC)/National Institute for Health Research (NIHR) Methodology Research Programme, the NIH Office of Behavioral and Social Sciences Research (OBSSR) and the Robert Wood Johnson Foundation. The workshop papers are being made publicly available with the agreement of the publishers of the American Journal of Preventive Medicine.

Thursday, 20 October 2016

If the user can't use it, it doesn't work: the invisible costs of bad software

This is a quick rant about unusable enterprise systems and about turning visible costs into invisible ones. For earlier, longer discussions of other unusable systems, see my reviews of ResearchFish and an electronic healthcare system.

Yesterday, I was one of several people asked to use the Crown Commercial Services system to review some documents related to a bid for one of our public funding bodies. The use of this system is apparently mandated for that organisation.

I was sent instructions on how to do part of the process (which I could not have worked out from the user interface). I followed the instructions as far as they were relevant, and then explored some more to try to locate the documents of the actual bids (which appeared to comprise 28 separate documents for eight bids). Then I tried to download them all in one file. Thirty minutes later, the system timed out on me while it was still generating that file. When I logged back in, I couldn't locate the download window again without simply repeating all the same actions, and I ran out of time, energy and will to pursue it.

This is yet another example of a system where there is no evidence that the developers ever considered how the system would be used, by whom, under what circumstances, with what learning curve for first-time use... or anything else about the users. Susan Dray has a nice maxim: "If the user can't use it, it doesn't work". This is yet another enterprise system that is absolutely not fit for purpose.

What this does is shift costs from development (investing in making a system that is fit for purpose) to use (forcing every user of the system to waste time trying to achieve their goals despite the system). The former would be a visible cost to the developers and the people who commissioned the system, while the latter is an invisible cost borne in the stress and lost productivity of all the people who have to use the system. For the UK Research Excellence Framework (REF), these invisible costs were estimated at almost £250 million. That was a one-off exercise; there should be a practice of estimating the annual costs of unusable enterprise systems. I'm pretty confident that the invisible costs would turn out to be significantly greater than the visible cost of creating a system that was fit for purpose in the first place. And we know how to do that. We have known how to do it for decades!
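
To make the shift concrete, here is a minimal back-of-envelope sketch (in Python) of how the invisible costs scale with the number of users. Every figure in it is a hypothetical placeholder rather than data from any real system; only the shape of the calculation matters.

    # Back-of-envelope comparison of visible vs invisible costs of an
    # unusable enterprise system. All figures are hypothetical placeholders.
    users = 5_000               # people obliged to use the system each year
    hours_wasted_per_user = 4   # extra hours per user per year spent fighting the UI
    hourly_cost_gbp = 40        # loaded cost of an hour of staff time

    invisible_annual_cost = users * hours_wasted_per_user * hourly_cost_gbp

    # One-off investment in usability work during development
    # (again a placeholder, not a real quote).
    visible_usability_cost = 150_000

    print(f"Invisible cost per year:         £{invisible_annual_cost:,}")
    print(f"Visible usability cost, one-off: £{visible_usability_cost:,}")

With these placeholder numbers the invisible cost comes to £800,000 a year, so it would overtake the one-off visible cost of getting the usability right within the first few months of use. The exact figures don't matter; the asymmetry does.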