Wednesday, 15 March 2017

Safer Healthcare



I've just finished reading Safer Healthcare. For me, the main take-home message is that different kinds of safety pertain to different situations. Vincent and Amalberti describe three different approaches to safety:
  • ultra-safe, avoiding risk, amenable to standardised practices and checklists. This applies to the areas of healthcare where it is possible to define (and follow) standardised procedures.
  • high-reliability, managing risks, which I understand as corresponding to "resilient" or "safety II" – empowering people within the system to learn and adapt. This seems to apply to a lot of healthcare, where the variabilities can't be eliminated, but can be managed.
  • ultra-adaptive, embracing risk. This relies on the skills and resilience of individuals. This applies to innovative techniques (the very first heart transplant, for example) where it really isn't possible to plan fully ahead of time because so much is unknown and it relies on the skills of the individual.
The authors draw on the example of rock climbing. The safest forms of climbing (with a top-rope, which really does minimise the chances of hitting the ground in a fall) are in the first category; most climbing falls into the second: we manage risk by carefully following best practice while accepting that there are inherent risks; people more adventurous (and more skilled) than me push the boundaries of what is possible – both for themselves and for the community. But it is also possible to compromise safety, as graphically described by James McHaffie in a post addressed to Eve Lancashire, whose attitude to safety worries him (see about halfway through the post).

Vincent and Amalberti's categorisation highlights why comparing healthcare with aviation in terms of safety is of limited value: commercial aviation is, in their terms, ultra-safe, with standardised procedures and many barriers to risk; healthcare involves far too much variability for all of it to be amenable to such an approach.

Another point Vincent and Amalberti make is that incidents and harm very often don't happen within one episode of care, but evolve over time. I am reminded of a similar point made in a very different context by Brown and Duguid, who described the way that photocopier engineers learn about their work (and about the variability across machines and situations): they describe it as being like the "passage of the sun across the sky" – i.e., it's not really clear when it starts or ends, or even exactly how it develops moment to moment. So many activities – and incidents – don't have a clear start and end. Possibly the main thing that distinguishes a reportable incident is that there is a point at which someone realises that something has gone wrong...

Sunday, 12 March 2017

Public health – personal health



I've just re-read the Academy of Medical Sciences report "Improving the health of the public by 2040". It makes many insightful points, particularly about the need for multidisciplinary training to deliver future professionals who can work across disciplinary silos – whether within healthcare and medical disciplines or with other disciplines such as computing and other branches of engineering. It also notes the likely importance of digital tools and "big data" in the future. It does, however, focus entirely on the population, apparently ignoring the fact that the population is made up of individuals, each of whom controls their own health – at least to the extent that they can choose whether to comply with (or adhere to) medical advice, and whether or not to share data about themselves. Not linking the individual to the population seems a big missed opportunity, because the health outcomes and practices of the population emerge from the individual behaviours of each person. Sure, the behaviours of individuals are shaped by population-level factors, but they aren't determined by them. It's surely time to link the individual and the population better.
This can be compared with the Wachter Review, which focused on the value of electronic health records and other digital technologies for delivering safer and more effective care. That review also highlighted the need for professionals whose skills span information technology and clinical expertise, but it considered issues such as engagement and usability too. It notes that "implementing health IT is one of the most complex adaptive changes in the history of healthcare". Without addressing that complexity (a consequence of the number of individuals, roles, organisations and cultures involved), it's going to be difficult to achieve population-level improvements – by 2040, or at any time.

Tuesday, 22 November 2016

The total customer experience

Last week, I had a delivery from DPD. At one level, it was very mundane (I received and signed for a parcel). At another, it was very positive: I could choose my delivery time to within an hour; I could even elect for a "green" slot when they were going to be in the area anyway (which obviously reduces their cost as well as simplifying my choice). Then on the day I could track the movement of my parcel online and anticipate pretty accurately when it would arrive. The user interface was good, and it was the "front end" of a good system that worked well. This made the overall experience of choosing, ordering and receiving the product much more pleasurable than it might otherwise have been.

In contrast, Samuel Gibbs reports on his experience of using novel Internet of Things tools to do something comparable for frequently bought products. Quite apart from the prospect of having dozens of IoT devices stuck up around the home, he highlights the challenges of receiving the goods once ordered, and of receiving goods in impractically large quantities. These new technologies aren't just about an easy-to-use button-press (like my "easy" button), but about the total customer experience of choosing, ordering and receiving... and someone needs to think that through properly too.

Tuesday, 15 November 2016

Making time for mindfulness

You can't just design a new technology and assume people will use it. The app stores are littered with apps that are used once, or not at all. It's important to understand how people fit technologies into their lives (and how the design of the technology affects how it's used). We choose to use apps (or to be open to responding to them) in ways that depend on time and place. For example, on the train in the morning, lots of commuters seem to be accessing news via apps: it's a good opportunity to catch up with what's happening in the world, and my journey's an appropriate length of time to do that in.

We've recently published a paper on how people make time for mindfulness practices.
Participants were mostly young, urban professionals (so possibly not representative of the more general population!), and their big challenge was how to fit meditation practices into their busy lives. Mindfulness is difficult to achieve on a commute, for example, so people need to explicitly make time for it, in a place that feels right. There was a tension between making it part of a routine (and so something that "has to be done") and making it feel like a spontaneous choice. But there were lots of other factors that shaped when, how and whether people used the mindfulness app, such as their sense of self-efficacy (how much they feel in control of their lives), their mood (mindfulness when you're upset or angry just isn't going to happen – not in ten minutes, anyway), and the attitudes of friends towards mindfulness (peer pressure is very powerful).

Some of these are factors that can't be designed for – beyond recognising that a mindfulness app isn't going to work for all people, or in all situations. Others can, perhaps, be designed for, such as managing people's expectations of what differences mindfulness might make in their lives, and giving guidance on when and how to fit in app use. What are some of the take-homes?
  • that incidental details (like the visual appearance or the sound of someone's voice) matter;
  • that people are on a 'journey' of learning how to practise mindfulness (don't force an expert to start at the beginning just because they haven't used this particular app before, for example);
  • that people need to learn how to fit app use and mindfulness into their lives, and expectations need to be managed; and
  • that engaging with the app isn't the same as engaging with mindfulness... but the one can be a great support for the other in the right circumstances.

Friday, 28 October 2016

Guidance on creating, evaluating and implementing effective digital healthcare interventions

This is an unconventional blog post – essentially, a place to index a set of papers. Last year, I participated in a workshop: ‘How to create, evaluate and implement effective digital healthcare interventions: development of guidance’. 
The workshop was led by Susan Michie, and resulted in a set of articles discussing key issues facing the development and evaluation of digital behaviour change interventions. There were about 50 participants, from a variety of countries and disciplines. And we all had to work ... on delivering interdisciplinary papers as well as on discussion. The outcome has just been published.
Credits: The workshop was hosted in London by the Medical Research Council, with funding from the Medical Research Council (MRC)/National Institute for Health Research (NIHR) Methodology Research Program, the NIH Office of Behavioral and Social Sciences Research (OBSSR) and the Robert Wood Johnson Foundation. The workshop papers are being made publicly available with the agreement of the publishers of the American Journal of Preventive Medicine.

Thursday, 20 October 2016

If the user can't use it, it doesn't work: the invisible costs of bad software

This is a quick rant about unusable enterprise systems and turning visible costs into invisible costs. For earlier, longer discussions of other unusable systems, see my reviews of ResearchFish and an electronic healthcare system.

Yesterday, I was one of several people asked to use the Crown Commercial Services system to review some documents related to a bid for one of our public funding bodies. The use of this system is apparently mandated for that organisation.

I was sent instructions on how to do part of the process (which I could not have worked out from the user interface). I followed the instructions as far as they were relevant, and then explored further to locate the documents of the actual bids (which appeared to comprise 28 separate documents across eight bids). Then I tried to download them all as one file. Thirty minutes later, the system timed out on me while still generating that file. When I logged back in, I couldn't find the download window again without simply repeating all the same actions. And I ran out of time, energy and will to pursue it.

This is yet another example of a system where there is no evidence that the developers ever considered how the system would be used, by whom, under what circumstances, with what learning curve for first-time use... or anything else about the users. Susan Dray has a nice claim: "If the user can't use it, it doesn't work". This is yet another enterprise system that is absolutely not fit for purpose.

What this does is shift costs from development (investing in making a system that is fit for purpose) to use (forcing every user of the system to waste time trying to achieve their goals despite the system). The former would be a visible cost to the developers and the people who commissioned the system, while the latter is an invisible cost borne in the stress and lost productivity of the people who have to use the system. For the UK Research Excellence Framework (REF), these invisible costs were estimated at almost £250 million. That was a one-off exercise; there should be a practice of estimating the annual costs of unusable enterprise systems. I'm pretty confident that the invisible costs would turn out to be significantly greater than the visible costs of creating a system that was fit for purpose in the first place. And we know how to do that. We have known how for decades!
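
To make the cost-shifting argument concrete, here is a back-of-envelope sketch of how such an estimate works. Every figure below (number of users, sessions, minutes wasted, hourly staff cost) is invented for illustration – the point is the shape of the calculation, not the numbers.

```python
# Back-of-envelope estimate of the "invisible" cost of an unusable
# enterprise system. All input figures are hypothetical.
users = 10_000            # staff obliged to use the system
sessions_per_year = 50    # uses of the system per person per year
minutes_wasted = 20       # extra minutes per session vs a usable system
hourly_cost = 40.0        # fully loaded cost of an hour of staff time (GBP)

invisible_cost = users * sessions_per_year * (minutes_wasted / 60) * hourly_cost
print(f"Invisible annual cost: £{invisible_cost:,.0f}")  # £6,666,667
```

Even on modest assumptions like these, the recurring invisible cost quickly outgrows the visible, one-off cost of building a usable system in the first place.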

Wednesday, 17 August 2016

Reflections on two days in a smart home

I've just had the privilege of spending two days in the SPHERE smart home in Bristol. It has been an interesting experience, though much less personally challenging than I had expected. For example, it did not provoke the intensity of reaction from me that wearing a fitbit did. What have I learned? That a passive system that just absorbs data, and that can't be inspected or interacted with by the occupant, quickly fades into the background, but that it demands huge trust from the occupant (because it is impossible to anticipate what others can learn about one's behaviour from data that one cannot see). And that, as well as being non-threatening, technology has to have a meaningful value and benefit to the user.

Reading the advance information about staying in the SPHERE house, I was reassured that they had considered safety and privacy issues well. I wasn't sure what to expect of the wearable devices or how accurate they would be; my experience of wearing a fitbit previously had left me with low expectations of accuracy. I anticipated that wearing devices in the house might make me feel like a lab rat, and I was concerned about wearing anything outside the house. It turned out that the only wearable was worn on the wrist, and only in the house anyway, making it less obtrusive than commercial wearables.

I had no idea what interaction mechanisms to expect: I expected to be able to review the data being gathered in real time, and wondered whether I would be able to draw any inferences from it. Wrong! The data was never available for inspection, because of the current experimental status of the house.

When we arrived, it was immediately obvious that the house is heavily wired, but most of the technology is one-way (sucking information without giving anything back to the participant). Most of the rooms are quite sparse and magnolia. The dining room feels very high-tech, with wires and chips and stuff all over the place – more like a lab than a home. To me, this makes that room a very unwelcoming place to be, so we chose to eat dinner in the living room.

I was much more aware of the experimental aspects of the data gathering (logging our activities) than of the lifestyle (and related) monitoring. My housemate seemed quite distracted by the video recording for a while; I was less distracted by it than I had expected. Because I could not inspect the data, I had no opportunity to reflect on it, so it quickly became invisible to me.
 
The data gathering that we did manually was meant to define the 'ground truth', but with the best will in the world I'm not sure how accurate the data we provided was – we both kept forgetting to carry the phones everywhere with us, and kept forgetting to start new activities or finish completed ones. Recording activities involves articulating the intention to do something (such as making a hot drink or putting shopping away) just before starting to do it, and then articulating that it has been finished when it's over. This isn't natural! Conversely, at one point I happened to put the phone on a bedside table and accidentally started logging "sleep" through the NFC tag!
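
For the curious, here is a minimal sketch of what that kind of start/stop activity logging might look like, assuming a hypothetical mapping from NFC tags to activity labels (this is illustrative only, not the actual SPHERE implementation). It also shows why forgetting the closing scan corrupts the ground truth – the activity never makes it into the log – and how putting the phone down on a tag starts a spurious one.

```python
from datetime import datetime

# Hypothetical mapping from NFC tag IDs to activity labels.
# (Invented for illustration; not the real SPHERE tags or protocol.)
TAG_TO_ACTIVITY = {"tag-04a1": "make hot drink", "tag-09c3": "sleep"}

class GroundTruthLogger:
    """Start/stop activity logger of the kind used to label sensor data."""
    def __init__(self):
        self.open_activities = {}   # activity -> start time
        self.log = []               # completed (activity, start, end) records

    def tag_scanned(self, tag_id):
        activity = TAG_TO_ACTIVITY[tag_id]
        if activity in self.open_activities:
            # A second scan closes the activity and records it.
            start = self.open_activities.pop(activity)
            self.log.append((activity, start, datetime.now()))
        else:
            # A first scan opens it. If the resident forgets the closing
            # scan, the record never reaches the log: bad ground truth.
            self.open_activities[activity] = datetime.now()

logger = GroundTruthLogger()
logger.tag_scanned("tag-09c3")  # phone put down on the bedside table:
                                # "sleep" is now (wrongly) being logged.
```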

By day 2, I was finding little things oppressive: the fact that the light in the toilet didn’t work and neither did the bedside lights; the lack of a mirror in the bedroom; the fact that everything is magnolia; and the trailing wires in several places around the house. I hadn't realised how important being "homely" was to me, and small touches like cute doorstops didn't deliver.

To my surprise, the room I found least private (even though it had no video) was the toilet: the room is so small and the repertoire of likely actions so limited that it felt as if the wearable was transmitting details that would be easily interpreted. I have no way of knowing whether this is correct (I suspect it is not).

At one point, the living room got very hot so I had to work out how to open the window; that was non-trivial and involved climbing on the sofa and the window sill to work out how it was secured. I wonder what that will look like as data, but at least we had fresh air! 

By the time we left, I was getting used to the ugliness of the technology, and even to the neutrality of the house colours. I had moved things around to make life easier – e.g., moving the telephone off my bedside table to make space for my water and phone (though having water next to the little PCB felt like an accident waiting to happen).

My housemate worked with the SPHERE team to visualise some data from three previous residents, which showed that all three of them had eaten their dinners in the living room rather than the dining room. We both found this slightly amusing, but also affirming: other people had made the same decision as we did.

The main issue for me was that the 'smart' technology had no value to me as an inhabitant of the house in its current experimental state. And I would really expect to go beyond inspectability of data to interactivity before the value becomes apparent. Even then, I'm not sure whether the value is short- or long-term: is it about learning about health and behaviours in the home, or about real-time monitoring and alerting for health management? The long-term value will come with the latter; for the former, people might just want a rent-a-kit that allows them to learn about their behaviours and adapt them over maybe 2-3 months. But this is all in the future. The current home is a prototype to test what is technically possible. The team have paid a lot of attention to privacy and trust, but not much yet to value. That's going to be the next exciting challenge...