Tuesday, 26 January 2016

The lifecourse and digital health

I've just been away for the weekend with a group of people of varying ages. Over breakfast, I was chatting with Diane (names have been changed), who surmised that she was the oldest person there. I looked quizzical: surely she's in her 70s and Edna is in her late 80s? But no: apparently, Diane is 88, and thinks that Edna is only 86. Appearances can be deceptive. Diane has a few health niggles (eyesight not as good as it once was, hip occasionally twinges) but she remains fit and active, physically and mentally. I hope I will age as well.

Meanwhile, last week I was at an Alan Turing Institute workshop on "Opportunities and Challenges for Data Intensive Healthcare". The starting point was that data sciences have always played a key role in healthcare provision and deployment of preventative interventions, and that we need novel mathematical and computational techniques to exploit the vast quantities of health and lifestyle data that are now being generated. Better computation is needed to deliver better health management and healthcare at lower cost. And of course people also need to be much more engaged in their own care for care provision to be sustainable.

There was widespread agreement at the meeting that healthcare delivery is in crisis, with rising costs and rising demands, and that there is a need for radical restructuring and rethinking. For me, one of the more telling points made (by a clinician) is that significant resources are expended to little good effect in the interests of keeping people alive, when perhaps they should be left to die peacefully. The phrase used was "torturing people to death". I don't imagine many of us want to die in intensive care or in an operating theatre. Health professionals could use better data analytics to make more informed decisions about when "caring" means intervening and when it means stepping back and letting nature take its course.

In principle, better data, better data analysis, and better personalised health information should help us all to manage our own health and wellbeing better – not taking over our lives, but enabling us to live our lives to the full. My father-in-law's favourite phrase was "I'd like a bucket full of health please". But there's no suggestion that any of us will (or wants to) live forever. At the meeting, someone suggested that we should be aiming for the "Duracell bunny" approach to life: live well, live long, die quickly. Of course, that won't be possible for everyone (and different people have different conceptions of what it means to "live well").

This presents a real challenge for digital health and for society: to re-think how each and every one of us lives the best life we can, supported by appropriate technology. There's a widespread view that "data saves lives"; let's also try to ensure that the saved lives are worth living!

Tuesday, 8 September 2015

Experiences of home haemodialysis

An open letter to study participants:

You generously shared your experiences of managing your care on home haemodialysis (HHD), and of using the dialysis machine. You were one of 19 patients and families, or members of care teams, across four NHS Trusts who took part in the study. There were lots of common themes in what you told us:
  • about the challenges of learning to do dialysis in the first place (and how scary it was in the first few weeks at home), but how it became routine (“like driving a car”) over time;
  • about the challenges of troubleshooting, particularly when the problem was an unfamiliar one;
  • about the ways your home care team support you – and might be able to support you better if you had more seamless data exchange with your team;
  • about the common difficulties, such as clearing bubbles from the system and remembering to open and close all clamps at the right time; and
  • about the various strategies you have discovered for keeping yourselves safe.

The project has now finished, and we’ve been reporting the findings as widely as possible.
We hope that this project can ‘make visible’ your experiences and practices in managing care at home, and that this will help manufacturers to design next-generation systems and help nephrology services to plan effective home care support.

Thank you to everyone (patients, carers and professionals) who made the study possible.

Monday, 31 August 2015

The Digital Doctor

I’ve just finished reading The Digital Doctor by Robert Wachter. It was published this year, and gives great insight into the US developments in electronic health records, particularly over the past few years: Meaningful Use and the rise of EPIC. The book manages to steer a great course between being personal (about Wachter’s career and the experiences of people around him) and drawing out general themes, albeit from a US perspective. I’d love to see an equivalent book about the UK, but suspect there would be no-one qualified to write it.

The book is simultaneously fantastic and slightly frustrating. I'll deal with the frustrating first: although Wachter claims that a lot of the book is about usability (and indeed there are engaging and powerful examples of poor usability that have resulted in untoward incidents), he seems unaware that there’s an entire discipline devoted to understanding human factors and usability, and that people with that expertise could contribute to the debate. My frustration is not with Wachter, but with the fact that human factors is apparently still so invisible, and that there still seems to be an assumption that the only qualification needed to be an expert in human factors is to be a human.

The core example (the overdose of a teenage patient with 38.5 times the intended dose of a common antibiotic) is told compellingly from the perspectives of several of the protagonists:

  • poor interface design leads to the doctor specifying the dose in mg, but the system defaulting to mg/kg and therefore multiplying the intended dose by the weight of the patient;
  • the system issues so many indistinguishable alerts (most very minor) that the staff become habituated to cancelling them without much thought – and one of the reasons for so many alerts is the EHR supplier covering themselves against liability for error;
  • the pharmacist who checked the order was overloaded and multitasking, using an overly complicated interface, and trusted the doctor;
  • the robot that issued the medication had no ‘common sense’ and did not query the order;
  • the nurse who administered the medication was new and didn’t have anyone more senior with whom to quickly check the prescription, so assumed that the earlier checks would have caught any error and that the order must be correct;
  • the patient was expecting a lot of medication, so didn’t query how much “a lot” ought to be.

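The first of those failure modes is mechanistic enough to sketch in a few lines of code. This is purely illustrative (the function and field names are hypothetical, not drawn from any real EHR, and the numbers are chosen to reproduce the 38.5× factor): if the order-entry form defaults the unit to mg/kg, a dose the prescriber meant as a total in mg is silently multiplied by the patient's weight.

```python
# Illustrative sketch of the mg vs mg/kg defaulting error.
# Names and values are hypothetical, not from any real EHR.

def dose_to_administer(entered_dose: float, unit: str, weight_kg: float) -> float:
    """Return the total dose in mg that the system will issue."""
    if unit == "mg":
        return entered_dose              # prescriber meant a total dose
    elif unit == "mg/kg":
        return entered_dose * weight_kg  # system treats it as weight-based
    raise ValueError(f"unknown unit: {unit}")

# The prescriber intends 160 mg in total, but the form defaults to "mg/kg":
intended = dose_to_administer(160, "mg", weight_kg=38.5)     # 160 mg
issued = dose_to_administer(160, "mg/kg", weight_kg=38.5)    # 6160 mg
print(issued / intended)                                     # 38.5-fold overdose
```

The point of the sketch is how unremarkable the code path is: nothing in the system distinguishes "the prescriber chose mg/kg" from "the form defaulted to mg/kg".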
This is about design and culture. Yet there is surprisingly little in the book about safer design from the outset (it’s hardly as if “alert fatigue” is a new phenomenon, or as if the confusability of units in a user interface is surprising or new): while those involved in deploying new technology in healthcare should be able to learn from their own mistakes, there’s surely also room for learning from the mistakes (and the expertise!) of others.

The book covers a lot of other territory: from the potential for big data analytics to transform healthcare to the changing role of the patient (and the evolving clinician–patient relationship) and the cultural context within which all the changes are taking place. I hope that Wachter’s concluding optimism is well founded. It’s going to be a long, hard road from here to there that will require a significant cultural shift in healthcare, and across society. This book really brought home to me some of the limitations of “user centred design” in a world that is trying to achieve such transformational change in such a short period of time, with everyone having to just muddle through. This book should be read by everyone involved in the procurement and deployment of new electronic health record systems, and by their patients too... and of course by healthcare policy makers: we can all learn from the successes and struggles of the US health system.

Sunday, 30 August 2015

On rigour, numbers and discovery

Recently, I was asked the following:

In talking to my psychology seminar group about their qualitative lab, I ended up looking at Helene Joffe’s book chapter on thematic analysis. She suggests including diagrammatic representations of the themes, together with quantitative data about how many participants mentioned the theme and its subparts. This appealed to the psychology students because it gives them quantitative data and helped them see how prevalent the theme was within the sample.

And then today I saw another paper, “Supporting thinking on sample sizes for thematic analyses: a quantitative tool”. It argues that one should consider the power of the study when deciding on sample size – another concept I’d only seen in quantitative research.

Both of these sources seem to be conducting qualitative analysis with at least a nod towards some of the benefits of quantitative data, which appears to give qualitative analysis more rigour. Of course, simply adding numbers doesn’t necessarily make something more rigorous, but it does add more information to the results of an analysis, and this could influence the reader’s perception of the quality of the research. However, I don’t recall seeing this in any HCI papers. Why isn’t it used more often?

The answer (or at least, my answer) hinges on nuances of research tradition that are not often discussed explicitly, at least in HCI:

Joffe, Fugard and Potts are all thinking and working in a positivist tradition that assumes an independent reality ‘out there’, and that doesn’t take into account the role of the individual researcher in making sense of the data. Numbers are great when they are meaningful, but they can hide a lot of important complexity. For example, in our study of people’s experience of home haemodialysis, we could report how many of the participants had a carer and how many had a helper. That’s a couple of numbers. But the really interesting understanding comes in how those people (whether trained as a carer or just acting as a helper) work with the patient to manage home haemodialysis, and how that impacts on their sense of being in control, how they stay safe, their experience of being on dialysis, and the implications for the design of both the technology and the broader system of care. Similarly, we could report how many of our participants reported feeling scared in the first weeks of dialysis, but that wouldn’t get at why they felt scared or how they got through that stage. We could now run a different kind of study to tease out the factors that contribute to people being scared (having established the phenomenon) and put numbers on them, but recruiting the larger sample (60–80 participants) needed for this kind of analysis would involve scouring the entire country for willing HHD participants and getting permission to conduct the study from every NHS Trust separately; I’d say that’s a very high cost for a low return.
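As an aside, the "quantitative tool" in the Fugard and Potts paper is, at heart, a binomial calculation of how likely a theme is to surface in a sample at all. A rough sketch of that idea (my own re-implementation of the framing, not their code, with illustrative numbers):

```python
# Binomial reasoning behind sample-size tools for thematic analysis:
# if a theme occurs in a proportion p of the population, what is the
# probability of seeing it in at least k of n sampled participants?
from math import comb

def prob_theme_observed(n: int, p: float, k: int = 1) -> float:
    """P(theme appears in >= k of n participants), theme prevalence p."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# With 19 participants (as in the HHD study), a theme held by 10% of the
# population has roughly an 86% chance of surfacing at least once:
print(round(prob_theme_observed(19, 0.10), 2))  # 0.86
```

Demanding a high probability of seeing rarer themes several times each, rather than once, is what pushes the required sample towards the 60–80 participants mentioned above.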

Numbers don’t give you explanatory power, and they don’t give you insights into the design of future technology. You need an exploratory study to identify issues; a quantitative analysis can then establish the scale of a problem, but it doesn’t give you insight into how to solve it. For HCI studies, most people are more interested in understanding the problem for design than in doing the basic science that’s closer to hypothesis testing. Neither is right or wrong, but they have different motivations and philosophical bases. And as Gray and Salzman argued many years ago, using numbers to compare features that are not strictly comparable – in their case, features of different usability methods when used in practice – is 'damaged' (and potentially damaging).

Wolcott (p.36) quotes a biologist, Paul Weiss, as claiming, “Nobody who followed the scientific method ever discovered anything interesting.” The quantitative approach to thematic analysis doesn’t allow me to answer many of the questions I find interesting, so I’m not going to shift in that direction just to do studies that others consider more rigorous. Understanding the prevalence of phenomena is important, but so is understanding the phenomena, and the techniques you need for understanding aren’t always compatible with those you need for measuring prevalence. Unfortunately!

Saturday, 22 August 2015

Innovation for innovation's sake?

As Director of the UCL Institute of Digital Health, I am expected to envision the future. The future is fuelled by innovation and vision. And there's plenty of that around. But the reality is much more challenging: as summarised in a recent blog post, most people aren't that interested in engaging with their health data (the ones who are most likely to be tracking their data are young, fit and wealthy), and most clinicians are struggling to even do their basic (reactive) jobs, without having much chance to think about the preventative (proactive) steps they might be taking to help people manage their health.

Why might this be? Innovation is creative and fun. It's also essential (without it, we'd still be wallowing around in the primordial soup). But there's a tendency for innovation to assume a world that is simpler than the real world: people who are engaged and compliant and have time to take up the innovation. Innovation tends not to engage with the inconvenient truths of real life, or to tackle the difficult and complex challenges that get in the way of simple visions.

We need a new approach to innovation: one that takes the really difficult challenges seriously, that accepts that the rate of progress may be slow, that recognises that it's much harder to change people and cultural practices than it is to change technology, but that these all need to be aligned for innovation to really work.

We need innovation that works with and for people. And we need to recognise that an important part of innovation is dealing with the inconvenient and difficult problems that seem to beset healthcare delivery, in all its forms.

Sunday, 24 May 2015

Digital Health: tensions between hype and reality

There are many articles predicting an amazing future for digital technologies for healthcare: wearables, implantables, wirelessly enabled to gather vital signs information and collate it for future sensemaking by the individual, clinicians and population health researchers. Two examples that have recently come to my attention are a video by the American Psychiatric Association and a report by Deloitte. The possibilities are truly transformational.

Meanwhile, I recently visited a friend who has Type II diabetes. On his floor, half hidden by a table, I spotted what I thought was a pen lid. It turned out to be the top of his new lancing kit. Although he had been doing blood glucose checks daily for well over a decade, he hadn't done one for over two weeks. Not just because he'd lost an essential part of the equipment, but because he'd been prescribed the new tool and hadn't been able to work out how to use it. So losing part of it wasn't a big deal: it was useless to him anyway. He told me that when he'd reported his difficulties to his clinician, he'd... been prescribed a second issue of exactly the same equipment. So now he has three sets of equipment: the original (Accu-Chek) lancing device and blood glucose meter, which he has used successfully for many years, but which he can't use now because he doesn't have spare consumables; and two lancing devices and meters (from a different manufacturer), with plenty of spare consumables, which he can't use because he finds the lancing device too difficult to use. And in trying to work out with him what the problem was, we managed to break one of them. Good thing he's got a spare!

And it's not just isolated individuals who struggle: a recent Forbes report describes similar issues at scale, with poor usability of electronic health records and patient portals making work less efficient and effective rather than more.

So on the one hand we have the excitement of future possibilities that are gradually becoming reality for many people, and on the other hand we have the lived experiences of individuals. And for some people, change is not necessarily good. The real challenge is to design a future that can benefit all, not just the most technology-savvy in society.

Saturday, 7 February 2015

Designing: the details and the big picture

I was at a meeting this week discussing developments to the NHS Choices site. This site is an amazing resource, and the developers want to make it better, more user-centred. But it is huge, and has huge ambitions: to address a wide variety of health-related needs, and to be accessible by all.

But of course, we are not all the same: we have different levels of knowledge, different values and needs, and different ways of engaging with our own health. Some love to measure and track performance (food intake, weight, blood pressure, exercise, sleep, mood: with wearable devices, the possibilities are growing all the time). Others prefer to just get on with life and react if necessary.

We don't all choose to consume news in the same way (we read different papers, track news through the internet, TV or radio, or maybe not at all); similarly, we don't all want health information in the same form or "voice". And it is almost impossible to consider all the nuanced details of the design of a site that is intended to address the health needs of "everyone" while also maintaining a consistent "big picture". Indeed, if one imagines considering every detail, the task would become overwhelmingly large. So some "good enough" decisions have to be made.

I am very struck by the contrast between this, as an example of interaction design where there is little resource available to look at details, and the course that my daughter is doing at the moment, which has included a focus on typographical design. In the course, they are reviewing fine details of the composition and layout of every character. Typography is a more mature discipline than interaction design, and arguably more tractable (it's about the graphics and the reading and the emotional response). I hope that one day interaction design will achieve this maturity, and that it will be possible to have the kind of mature discourse about both the big picture and the details of users, usability and fitness for purpose.