Saturday, 7 February 2015

Designing: the details and the big picture

I was at a meeting this week discussing developments to the NHS Choices site. This site is an amazing resource, and the developers want to make it better, more user-centred. But it is huge, and has huge ambitions: to address a wide variety of health-related needs, and to be accessible by all.

But of course, we are not all the same: we have different levels of knowledge, different values, needs, and ways of engaging with our own health. Some love to measure and track performance (food intake, weight, blood pressure, exercise, sleep, mood: with wearable devices, the possibilities are growing all the time). Others prefer just to get on with life and react if necessary.

We don't all choose to consume news in the same way (we read different papers, track news through the internet, TV or radio, or maybe not at all); similarly, we don't all want health information in the same form or "voice". And it is almost impossible to consider all the nuanced details of the design of a site that is intended to address the health needs of "everyone" while also maintaining a consistent "big picture". Indeed, if one imagines considering every detail, the task would become overwhelmingly large. So some "good enough" decisions have to be made.

I am very struck by the contrast between this, as an example of interaction design where there is little resource available to look at details, and the course that my daughter is doing at the moment, which has included a focus on typographical design. In her course, they are reviewing the fine details of the composition and layout of every character. Typography is a more mature discipline than interaction design, and arguably more tractable (it's about the graphics, the reading and the emotional response). I hope that one day interaction design will achieve this maturity, and that it will then be possible to have the same kind of discourse about both the big picture and the details of users, usability and fitness for purpose.


Tuesday, 20 January 2015

Designing, documenting, buying, using: the mind-boggling hob

I have complained before about how difficult some taps are to use. These should be simple interactive objects whose design requirements are well understood by now, and yet designers keep generating new designs that work less well than previous models. Why is there so much emphasis on unnecessary innovation, as if innovation were inherently a good thing?

Ursula Martin has just introduced me to the unusable hob:
"This bizarre thing requires you to select a ring with the rotating arrow before applying plus/minus.  Now here's a thing. Suppose you have switched on ring 1 (bottom right), and no others, set it to 4 (a red 4 appears due South of the Ring 1) and a few minutes later you decide you want to turn it down to 3. How do you do that? Press the minus sign, as that is the only ring that is on? Oh no, nothing happens if you do that. it appears that you HAVE TO CYCLE THROUGH ALL THE OTHER RINGS AND BACK TO 1, then red 4 will start to flash, and then the minus/plus signs will change it. Just imagine the hoopla of doing that when you have four rings going at once."

The instruction manual is full of information like:
"Each cooking zone is equipped with an auto-
matic warm-up function. When this is activa-
ted, then the given cooking zone is switched
on at full power for a time dpending on the heat
setting selected, and is then switched back to
the heat setting set.

Activate the automatic warm-up function by
setting the required heating power by touching
the (+) sensor (5) first. Then the heating level
„9” is displayed intermittently on the cooking
zone indicator (3) with the letter “A” for around
10 seconds."

And so on, for many pages (spelling mistakes are an added bonus). This is a manual that opens with the (only slightly patronising) greeting:
"DEAR USER,
The plate is exceptionally easy to use and extremely efficient. After reading the instruction manual, operating the cooker will be easy."

Ursula notes that: "The designer seems to have a mythical cook in mind who doesn’t want to change the temperature very often". Alternatively, maybe it's from the Dilbert school of design. All one can be sure of is that the design team apparently never use a hob, and that the technical authors who wrote the 28-page manual on how to operate this one were happy to write out inscrutable instructions without ever seriously considering their comprehensibility. And had apparently also never used a hob.

Finally, Ursula reported that "the flat owner is very embarrassed about it - he has just had the kitchen redone and I am the first tenant since, and he hadn’t used the thing himself". If you've ever bought a new appliance and tried to assess its usability before purchase, you will probably sympathise with the landlord. It's usually impossible to test these things out before buying, to read the manual in advance, or to get any reliable information from the sales team about usability. In fact, ease of use, usability and fitness for purpose don't feature prominently in our discourse.

We really do need a cultural shift such that fitness for purpose trumps innovation. Don't we?

Friday, 9 January 2015

Compliance, adherence, and quality of life

My father-in-law used to refuse presents on the principle that all he wanted was a "bucket full of good health". And that is something that no one is really in a position to give. Fortunately for him (and us!) he remained pretty healthy and active until his last few weeks. And many of us are in the same position: we have mercifully little experience of chronic ill health. But not everyone is so lucky.

My team has been privileged to work with people suffering from chronic kidney disease, and with their families, to better understand their experiences and their needs when managing their own care. Some people with serious kidney disease have a kidney transplant. Others have dialysis (which involves having the blood 'cleansed' every couple of days). There is widespread agreement amongst clinicians that it's best for people if they can do this at home. And the people we worked with (who are all successful users of dialysis technology at home) clearly agreed. They were less concerned, certainly in the way they talked with us, about their life expectancy than about the quality of their lives: their ability to go out (for meals, on holiday, etc.), to work, to be with their families, to feel well. Sometimes, that demanded compromise: some people reported adopting short-cuts, mainly to reduce the time that dialysis takes. And one had her dialysis machine set up on her verandah, so that she could dialyse in a pleasant place. Quality of life matters too.

The health literature often talks about "compliance" or "adherence", particularly in relation to people taking medication. There's the same concern with dialysis: that people should be dialysing according to an agreed schedule. And mostly, that seemed to be what people were doing. But sometimes they didn't, because other values dominated. And sometimes they didn't because the technology didn't work as intended and they had to find ways to get things going again. Many of them had turned troubleshooting into an art! As more and more health management happens at home, which means that people are immediately and directly responsible for their own welfare, it seems likely that terms like "compliance" and "adherence" need to be re-thought, to allow us all to talk about living as enjoyably and well as we can, with the conditions we have and the available means for managing those conditions. And (of course) the technology should be as easy to use and safe as possible. Our study is hopefully of interest not just to those directly affected by kidney disease, caring for someone who is, or designing technology for managing it, but also to those thinking more broadly about policy on home care and how responsibility is shared between clinicians, patients and families.


Wednesday, 7 January 2015

Strategies for doing fieldwork for health technology design


The cartoons in this blog post are from Fieldwork for Healthcare: Guidance for Investigating Human Factors in Computing Systems, © 2015 Morgan and Claypool Publishers, www.morganclaypool.com. Used with permission.
One of the themes within CHI+MED has been better understanding how interactive medical devices are used in practice, recognising that there are often important differences between work as imagined and work as done. This has meant working with many people directly involved in healthcare (clinicians, patients, relatives) to understand their work when interacting with medical devices: observing their interactions and interviewing them about their experiences. But doing fieldwork in hospitals and in people’s homes is challenging:
  • You need to get formal ethical clearance to conduct any study involving clinicians or patients. As I’ve noted previously, this can be time-consuming and frustrating. It also means that it can be difficult to change the study design once you discover that things aren’t quite the way you’d imagined, however much preparatory work you’d tried to do. 
  • Hospitals are populated by people from all walks of life, old and young, from many cultures and often in very vulnerable situations. They, their privacy and their confidentiality need to be respected at all times.
  • Staff are working under high pressure. Their work is part-planned, part-reactive, and the environment is complex: organisationally, physically, and professionally. The work is safety-critical, and there is a widespread culture of accountability and blame that can make people wary of being observed by outsiders.
  • Healthcare is a caring profession and, for the vast majority of staff, technology use is a means to an end; the design of that technology is not of interest (beyond being a source of frustration in their work).
  • You’re always an ‘outsider’: not staff, not patient, not visitor, and that’s a role that it can be difficult to make sense of (both for yourself and for the people you’re working with).
  • Given the safety-critical nature of most technologies in healthcare, you can’t just prototype and test ‘in the wild’, so it can be difficult to work out how to improve practices through design.

When CHI+MED started, we couldn’t find many useful resources to guide us in designing and conducting studies, so we found ourselves ‘learning on the job’. And through discussions with others we realised that we were not alone: that other researchers had very similar experiences to ours, and that we could learn a lot from each other.

So we pooled expertise to develop resources that give future researchers a ‘leg up’ in planning and conducting studies. We hope the results are useful:

  • We’ve recently published a journal paper that focuses on themes of gaining access; developing good relations with clinicians and patients; being outsiders in healthcare settings; and managing the cultural divide between technology human factors and clinical practice.
  • We’ve published two books on doing fieldwork in healthcare. The first volume reported the experiences of researchers through 12 case studies, covering experiences in hospitals and in people’s homes, in both developed and developing countries. The second volume presents guidance and advice on doing fieldwork in healthcare. The chapters cover ethical issues, preparing for the context and networking, developing a data collection plan, implementing a technology or practice, and thinking about impact.
  • Most of our work is neither pure ethnography nor pure Grounded Theory, but somewhere between the two in terms of both data gathering and analysis techniques: semi-structured, interpretivist, pragmatic. There isn’t an agreed name for this, but we’re calling them semi-structured qualitative studies, and have written about them in these terms.

If you know of other useful resources, do please let us know!

Saturday, 27 December 2014

Positive usability: the digital and the physical

I complain quite a lot about poor usability (of ResearchFish and electronic health records, for example), so it's good to be able to celebrate good usability (or at least a good user experience) too.

Last week, my car gained a puncture. On a Sunday. Not a good experience. But sorting it out was as painless as I can imagine: it was quick to find a mobile tyre replacement service (etyres, in case anyone else suffers a similar fate), to identify a suitable tyre amongst a very large number of options, and to fix a fitting time. All online (apart from the actual fitting, of course), and all clear and simple. It just worked.

I've had analogous experiences with some home deliveries recently: rather than the company leaving a note to say that they tried to deliver the parcel, that it has been returned to the depot, and that I can pick it up at my convenience (sigh!), they have notified me that it's ready and asked me to choose a delivery time that suits. All online; all easy.

Of course, neither tyre selection and fitting nor parcel delivery is as complex a task as data management of complex records. But it's delightful when the service is designed so that the digital and the physical fit together seamlessly, and digital technologies really deliver something better than could be achieved previously.

Friday, 21 November 2014

How not to design the user experience in electronic health records

Two weeks ago, I summarised my own experience of using a research reporting system. I know (from subsequent communications) that many other researchers shared my pain. And Muki Haklay pointed me at another blog post on the usability of enterprise software, which discusses how widespread this kind of experience is with many different kinds of software system.

Today, I've had another experience that I think it's worth reporting briefly. I had a health screening appointment with a nurse (I'll call her Naomi, but that's not her real name). I had to wait 50 minutes beyond the appointment time before I was seen. Naomi was charming and apologetic: she was struggling with the EMIS health record system, and every consultation was taking longer than scheduled. This was apparently only the second day that she had been using the health screening functions of the system. And she clearly thought that it was her own fault that she couldn't use it efficiently.

She was shifting between different screen displays more times than I could count. She had a hand-written checklist of all the items that needed to be covered in the screening, and was using a separate note (see right) to keep track of the measurements that she was taking. She kept apologising that this was only because the system was unfamiliar, and she was sure she'd be able to work without the checklist before long. But actually, checklists are widely considered helpful in healthcare. She was working systematically, but this was in spite of the user interactions with EMIS, which provided no support whatsoever for her tasks, and seemed positively obstructive at times. As far as I know, all the information Naomi entered into my health record was accurate, but I left her struggling with the final item: even though, as far as either of us could see, she had completed all the fields in the last form correctly, the system wasn't letting her save it, blocking it with a claim that a field (unspecified) had not been completed. Naomi was about to seek help from a colleague as I left. I don't know what the record will eventually contain about my smoking habits!

This is just one small snapshot of users' experience with another system that is not fit for purpose. Things like this are happening in healthcare facilities all over the world every day of the week. The clinical staff are expected to improvise and act as the 'glue' between systems that have clearly been implemented with minimal awareness of how they will actually be used. This detracts from both the clinicians' and the patients' experiences, and if all the wasted time were costed, it would probably come to billions of £/$/€/currency-of-your-choice. Electronic health records clearly have the potential to offer many capabilities that paper records could not, but they could be so, so much better than they are if only they were designed with their users and purposes in mind.

Wednesday, 5 November 2014

How not to design the user experience


Thank you to ResearchFish, a system that many UK researchers are required to use to report their research outcomes, for providing a rich set of examples of user experience bloopers. Enjoy! Or commiserate...

* The ‘opening the box’ experience: forget any idea that people have goals when using the system (in my case, to enter information about publications and other achievements arising from recent grants): just present people with an array of options, most of them irrelevant. If possible, hide the relevant options behind obscure labels, in the middle of several irrelevant ones. Ensure that there are no clear semantic groupings of items. The more chaotic and confused the interaction, the more of an adventure it’ll be for the user.

* The conceptual model: introduce neat ideas that have the user guessing: what’s the difference between a team member and a delegate? What exactly is a portfolio, and what does it consist of? Users love guesswork when they’re trying to get a job done.

* Leave some things in an uncertain state. In the case of ResearchFish, some data had been migrated from a previous system. For this data, paper titles seem to have been truncated and many entries have only one page number, for example. Data quality is merely something to aspire to.

* Leave in a few basic bugs. For example, there was a point when I was allegedly looking at page 6 of 8, but there was only a ‘navigate back’ button: how would I look at page 7 of 8? [I don’t believe page 7 existed, but why did it claim that there were 8 pages?]

* If ‘slow food’ is good, then surely ‘slow computing’ must be too: every page refresh takes 6-8 seconds. Imagine that you are wading through treacle.

* Make it impossible to edit some data items. Even better: make it apparently a random subset of all possible data items.

* Don’t present data in a way that makes it clear what’s in the system and what might be missing. That would make things far too routine. Give the system the apparent structure of a haystack: some superficial structure on the outside, but pretty random when you look closer.

* Thomas Green proposed the notion of ‘viscosity’: a system that makes something conceptually simple difficult to do in practice. One form of viscosity is ‘repetition viscosity’. Bulk uploads? No way! People will enjoy adding entries one by one. One option for adding a publication entry is by DOI. So the cycle of activity is: identify a paper to be added; go to some other system (e.g., crossref); find the intended paper there; copy the DOI; return to ResearchFish; paste the DOI. Repeat. Repeatedly. (See the sketch at the end of this list for how little a bulk alternative would involve.)

* Of course, some people may prefer to add entries in a different way. So add lots of other (equally tedious) alternative ways of adding entries. Keep the user guessing as to which will be the fastest for any given data type.

* Make people repeat work that’s already been done elsewhere. All my publications are (fairly reliably) recorded in Google Scholar and in the UCL IRIS system. I resorted at one point to accessing crossref, which isn’t exactly easy to use itself. So I was using one unusable system in order to populate another unusable system when all the information could be easily accessed via various other systems.

* Use meaningless codes where names would be far too obvious. Every publication has to be assigned to a grant by number. I don’t remember the number of every grant I have ever held. So I had to open another window to EPSRC Grants on the Web (GOW) in order to find out which grant number corresponds to which grant (as I think about it). For a while, I was working from four windows in parallel: scholar, crossref, GOW and ResearchFish. Later, I printed out the GOW page so that I could annotate it by hand to keep track of what I had done and what remained to be done. See Exhibit A.

* Tax the user’s memory: I am apparently required to submit entries for grants going back to 2006. I’ve had to start up an old computer to even find the files from those projects. And yes, the files include final reports that were submitted at the time. But now I’m expected to remember it all.

* Behave as if the task is trivial. The submission period is 4 weeks. A reminder to submit was sent out after two weeks. As if filling in this data is trivial. I reckon I have at least 500 outputs of one kind or another. Let’s say 10 minutes per output (to find and enter all the data and wait for the system response): that’s 5,000 minutes, or over 80 hours, which is more than two standard working weeks. Plus another 30-60 minutes per grant for entering narrative data. Yay: two weeks of my life when I am expected to teach, do research, manage projects, etc. etc. too. And that assumes that it’s easy to organise the work, which it is not. So add at least another week to that estimate.

* Offer irrelevant error messages: At one point, I tried doing a Scopus search to find my own publications to enter them. Awful! When I tried selecting one and adding it to my portfolio, the response was "You are not authorized to access this page.” Oh: that was because I had taken a break between doing the search and selecting the entry, so my access had timed out. Why didn’t it say that?!?

* Prioritise an inappropriate notion of security over usability: the security risk of leaving the page unattended was infinitesimal, while the frustration of a time-out and lost data is significant, and yet ResearchFish timed out in the time it took to get a cup of coffee. I suspect the clock on the ResearchFish page may have been running all the time I was using the Scopus tool; I'm not sure about that, but if I'm right then that's a crazy, crazy way to design the system. This kind of timeout is a great way of annoying users.

* Minimise organisation of data: At one point, I had successfully found and added a few entries from Scopus, but had also selected a lot of duplicate entries that were already in ResearchFish. There is no way to tell which have already been added. And every time the user tries to tackle the challenge a different way it’s like starting again, because every resource is organised differently. I have no idea how I would do a systematic check of completeness of reporting. This is what computers should be good at; it’s unwise to expect people to do it well.

* Sharing the task across a team? Another challenge. Everything is organised by principal investigator. You can add team members, but then who knows what everyone else is doing? If three co-authors are all able to enter data, they may all try. Only one will succeed, but why waste one person’s time when you can waste that of three (or more) people?

* Hide the most likely option. There are several drop-down menus where the default answer would be Great Britain / UK, but menu items are alphabetically ordered, so you have to scroll down to state the basically-obvious. And don’t over-shoot, or you have to scroll back up again. What a waste of time! There are other menus where the possible dates start at 1940: for reporting about projects going back to 2006. That means scrolling through over 60 years to find the most probable response.

* Assume user knowledge and avoid using forcing functions: at one point, I believed that I had submitted data for one funding body; the screen icon changed from “submit” to “resubmit” to reinforce this belief. But later someone told me there was a minimum data set that had to be submitted, and I knew I had omitted some of that. So I went back and entered it. And hey presto: an email acknowledgement of submission. So I hadn’t actually submitted previously, despite what the display showed. The system had let me apparently submit without blocking that. But it wasn’t a real submission. And even now, I'm not sure that I have really submitted all the data, since the login page still shows a message telling me that it's pending.

* Do not support multi-tasking. When I submitted data for one funding body, I wanted to get on with entering data for the next. But no: the system had to "process the submission" first. I cannot work while the computer system is working.

* Entice with inconsistency. See the order of the buttons for each of several grants (left). Every set is different. There are only 6 possible permutations of the three links under each grant heading; 5 of them are shown here. It must take a special effort to program the system to be so creative. Seriously: what does this inconsistency say about the software engineering of the system?
 
* Add enticing unpredictability. Spot the difference between the two screen shots on the right. And then take a guess at what I did to cause that change. It took me several attempts to work it out myself.
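
As promised under the ‘repetition viscosity’ item above, here is a hypothetical sketch of what a bulk alternative could look like: given a plain list of DOIs, the public Crossref REST API (api.crossref.org) will return bibliographic metadata for each one. This is my own illustration of how little work a bulk-upload feature would involve; it is not anything ResearchFish actually offers, and the DOIs below are placeholders to be replaced with a real list.

    # Hypothetical bulk DOI resolution via the public Crossref REST API.
    # Not a ResearchFish feature: an illustration of how simple bulk import
    # of publication metadata could be.
    import requests

    def fetch_metadata(doi):
        """Return basic bibliographic details for one DOI, or None on failure."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code != 200:
            return None
        work = resp.json()["message"]
        return {
            "doi": doi,
            "title": (work.get("title") or ["(untitled)"])[0],
            "journal": (work.get("container-title") or [""])[0],
            "year": work.get("issued", {}).get("date-parts", [[None]])[0][0],
            "pages": work.get("page", ""),
        }

    if __name__ == "__main__":
        # Replace these placeholders with a real list of DOIs (e.g. pasted
        # from a reference manager or repository export).
        dois = ["10.xxxx/placeholder-1", "10.xxxx/placeholder-2"]
        for doi in dois:
            record = fetch_metadata(doi)
            print(record if record else f"Could not resolve {doi}")

A small amount of glue code like this would replace the copy-one-DOI-at-a-time cycle, which is rather the point of the viscosity complaint.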

I started out trying to make this blog post light-hearted, maybe even amusing. But these problems are just scratching the surface of a system that is fundamentally unusable. Ultimately I find it really depressing that such systems are being developed and shipped in the 21st Century. What will it take to get the fundamentals of software engineering and user experience embedded in systems development?