Sunday, 18 March 2018

Invisible work

I have been on strike for much of the past four weeks – at least notionally. The truth is more nuanced than that, because I don't actually want my students and other junior colleagues to be disadvantaged by this action. I am, after all, fighting for the future of university education: their future. Yet I do want senior management and the powerful people who make decisions about our work and our pensions to be aware of the strength of feeling, as well as the rational arguments, around the pensions issue.

There have been some excellent analyses of the problem by academic experts from a range of disciplines.

As well as standing on picket lines, marching, discussing the issues around the strike, and not doing work that involves crossing said picket lines, I have continued to do a substantial amount of work. It has made me think more about the nature of invisible work. Bonnie Nardi and Yrjö Engeström identify four kinds of invisible work:
  1. work done in invisible places, such as the highly skilled behind-the-scenes work of reference librarians; to this I would add most of the invisible work done by university staff, out of sight and out of hours.
  2. work defined as routine or manual that actually requires considerable problem solving and knowledge, such as the work of telephone operators; don't forget completing a ResearchFish submission or grappling with the Virtual Learning Environment or many other enterprise software systems.
  3. work done by invisible people such as domestics (and sometimes Athena SWAN teams!);
  4. informal work processes that are not part of anybody’s job description but which are crucial for the collective functioning of the workplace, such as regular but open-ended meetings without a specific agenda, informal conversations, gossip, humour, and storytelling.
The time for (4) has been sadly eroded over the years as demands and expectations have risen without corresponding rises in resourcing.

To these, I would add work that is invisible because it is apparently ineffectual. For example, I wrote to our Provost about 12 days ago (the letter is reproduced below for the record), but I have no evidence that it was read; it certainly hasn't been responded to in any visible way.

The double-think required to be simultaneously on strike while also delivering on time-limited commitments to colleagues and students has also forced me to develop new approaches to revealing and hiding work. For example,
  • I have started logging my own work hours so that the accumulated time is visible to me (a minimal sketch of such a log appears after this list). Although I've been working far more than the hours set out in the Working Time Directive, I'm going to try to bring the time worked down to comply with it. This should help me say "no" more assertively in future. That's the theory, at least...
  • I have started saving emails as drafts so as not to send them "out of hours". There are 21 items in my email outbox as I type this; I'll look incredibly productive first thing on Monday morning!
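
For the curious, the hours log needn't be anything sophisticated. Here is a minimal Python sketch of the idea; the entries are made up, and the 48-hour figure is the Working Time Directive's cap on average weekly working time:

```python
# A minimal sketch of an hours log (illustrative entries, not real data).
# It totals hours per ISO week and flags weeks over the Working Time
# Directive's 48-hour cap on average weekly working time.
from collections import defaultdict
from datetime import date

WTD_WEEKLY_CAP = 48  # hours; the Directive averages this over a reference period

log = [
    (date(2018, 3, 12), 9.5),
    (date(2018, 3, 13), 11.0),
    (date(2018, 3, 14), 10.5),
]

weekly = defaultdict(float)
for day, hours in log:
    year, week, _ = day.isocalendar()
    weekly[(year, week)] += hours

for (year, week), total in sorted(weekly.items()):
    status = "OVER" if total > WTD_WEEKLY_CAP else "ok"
    print(f"{year}-W{week:02d}: {total:5.1f}h {status}")
```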
And finally, I will make visible the letter I wrote to the Provost:

Thank you for your encouraging message last week. You are right that none of us takes strike action lightly. We all want to be doing and supporting excellent teaching, research and knowledge transfer, but we are extremely concerned about the proposed pension changes, and we have found no other way to be heard.

I’ve worked in universities since 1981 and this is the first time I have taken strike action. The decision to strike has been one of the harder decisions I have taken in my professional career, but I think the impact of the proposed pension changes on our junior colleagues (and hence on the future of universities) is unacceptable, and I am not persuaded that a DB scheme is unaffordable.

Please continue to work with the other university leaders to find an early resolution to this dispute. UCL isn’t just estates and financial surplus: as you say, it’s a community of world-leading, committed people who work really hard, and who merit an overall remuneration package that is reflective of that. That includes pensions that aren’t a stock market lottery for each individual.

I’d like to be in my office meeting PhD students and post-docs next Monday morning, and in a lecture theatre with my MSc students on Monday afternoon. Please do everything in your power to bring this dispute to a quick resolution so that there’s a real possibility that “normal service” can be resumed next week.

Sunday, 4 March 2018

How not to design the user experience: update 2018

In November 2014, I wrote a summary of my experience of entering research data in Researchfish. Since then, aspects of this system have improved: at least some of the most obvious bugs have been ironed out, and being able to link data from ORCID makes one tedious aspect of the task (entering data about particular publications) significantly easier. So well done to the Researchfish team on fixing those problems. It's a pity the system is still not fit for purpose, despite the number of funders who are supporting (mandating) its use.

The system is still designed without any consideration of how people conceptualise their research outputs – or at least, not how I do. According to Researchfish, it takes less than a lunch break to enter all the data. There are two problems with this:
1. Few academics that I know have the time to take a lunch break.
2. So far, today, it has taken me longer than that just to work out a strategy for completing this multi-dimensional task systematically. It's like 3-D Sudoku, but less fun.

Even for publications, it's a two-dimensional task: select publications (e.g., from ORCID) and select the grants to which they apply. But if you just do this as stated, you get many-to-many relationships, with every selected publication linked to every selected grant, including grants it isn't associated with. And yes, I have tested this. So you have to decide which grant you're going to focus on, then go through the list and add the relevant publications... then go around the loop (add new publications > select ORCID > search > select publications > select grant) repeatedly for all grants. Maybe there's a faster way, but I haven't discovered it yet. Oh: and if you make a mistake, there isn't an easy way to correct it, so there is probably over-reporting as well as under-reporting on many grants.
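
To make the over-linking problem concrete, here is a hypothetical sketch (made-up data, not Researchfish's actual internals) of what bulk selection effectively does:

```python
# Hypothetical sketch of the over-linking problem (made-up data, not
# Researchfish's internals). Bulk-selecting publications and grants creates
# the full cartesian product of links, not just the true associations.

publications = ["paper_A", "paper_B", "paper_C"]
grants = ["grant_1", "grant_2"]

# What bulk selection effectively records:
bulk_links = {(p, g) for p in publications for g in grants}

# What I actually want recorded:
true_links = {("paper_A", "grant_1"), ("paper_B", "grant_1"),
              ("paper_C", "grant_2")}

spurious = bulk_links - true_links
print(f"{len(spurious)} spurious publication-grant links")  # -> 3
```

Hence the workaround of repeating the whole selection loop once per grant.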
I'm still trying to guess what "author not available" means in the information about a publication. My strategy for working out which paper each line refers to has been to keep Google Scholar open in parallel and search for the titles there, because those make more sense to me.

In the section on reporting key findings of a grant, when you save the entry, it returns you to the same page. Why would you want to save multiple times, rather than just moving on to the next step? Why isn't there a 'next' option? And why, when you have said there is no update on a completed grant, does it still take you to the update page? What was the point of the question?

When you're within the context of one award and you select publications, it shows all publications for all awards (until you explicitly select the option to focus on this award). Why? I'm in a particular task context...

When you're in the context of an award where you are simply a team member, you can filter by publications you've added, or by publications linked to this award, but not by publications that you've added that are also linked to this award. Those are the ones that I know about, and the ones that I want to check / update.
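
The frustrating thing is that the missing filter is just the conjunction of the two filters that do exist. A hypothetical sketch (made-up record fields, not Researchfish's API):

```python
# Hypothetical records (made-up fields, not Researchfish's API) showing that
# the missing filter is just the conjunction of the two existing ones.
publications = [
    {"title": "paper_A", "added_by": "me", "awards": {"award_1"}},
    {"title": "paper_B", "added_by": "colleague", "awards": {"award_1"}},
    {"title": "paper_C", "added_by": "me", "awards": {"award_2"}},
]

# Filter 1: publications I've added. Filter 2: publications on this award.
# What I want is simply both at once:
mine_on_this_award = [
    p for p in publications
    if p["added_by"] == "me" and "award_1" in p["awards"]
]
print([p["title"] for p in mine_on_this_award])  # -> ['paper_A']
```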

Having taken a coffee break, I returned to the interface to discover I had been logged out. I don't actually know my login details because the first time I logged in this morning I did so via ORCID. That option isn't available on the login page that appears after time-out. This is further evidence of poor system testing and non-existent user testing.

I could go on, but life is too short. There is no evidence of the developers having considered either conceptual design or task structures. There is no evidence that the system has actually been tested by real users who have real data entry tasks and time constraints. I really cannot comprehend how so many funders can mandate the use of a system that is so poorly designed, other than because they have the power to do so.

Monday, 19 February 2018

Qualitative research comes of age

For a long, long time, qualitative research has felt like a "poor relation" to quantitative: so much more subjective, so much harder to generalise, so much more reliant on the skills of the researcher to deliver quality.

I'm delighted to see that it's becoming more mainstream – at least based on the evidence of a couple of recent publications that appear in the mainstream research literature and set out how to report qualitative research well. One is in the medical literature, and the other in the psychology literature. The questions of what constitutes high-quality qualitative research, and how to report it, are ones we have grappled with, particularly when the findings of a study don't align well with the original aims because you discover that those aims were based on incorrect assumptions about the situation. I still get the sense that there is an asymmetry: qualitative researchers have to justify their methods to quantitative researchers much more forcefully than the converse. But this seems like progress nevertheless.

Quantitative research tells you about outcomes, but gives little (or no) insight into causes or processes. To improve outcomes, you really need to understand causes too...

Friday, 16 February 2018

Learning from past incidents?

I've been thinking about incident reporting in healthcare, in terms of what we can learn about the design of medical devices, based on both what is theoretically possible and also what actual incident reports show us.

Incident reporting systems (e.g., NRLS) are a potential source of information about poor usability and poor utility of interactive medical devices. However, because the healthcare culture typically focuses on outcomes rather than processes, instances of sub-optimal use typically pass unremarked. There is growing concern that there is under-reporting of incidents, but little firm evidence on the scale of that under-reporting. One study that compared observed errors against reported incidents involving intravenous medications identified 55 rate deviation and medication errors in nine hours of observation; 48 such incidents had been reported through the hospital incident reporting system over the previous two years, suggesting a reporting rate of about 0.1%.

Firth-Cozens et al. investigated causes for low reporting rates, even when clinicians had identified errors or examples of poor care. All groups of participants “considered that minor, commonplace or unintentional mistakes, ‘genuine or honest’ errors, one-off errors, or ones for which a subordinate is ‘obviously sorry’ or has insight, need not be reported”. Examples reported by their participants included incidents involving infusion pumps: problems for which the design or protocols for use of the pumps were contributing factors.

Even when incidents are reported, those reports might not deliver insights into what went wrong. In our recent study of incident reports involving home use of infusion pumps, we found that reports gave much greater insight into how people detected and recovered from the device not working than into what had caused the device not to work properly in the first place.
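
For a sense of where an estimate like that ~0.1% comes from: a back-of-the-envelope extrapolation, under the crude (and certainly too simple) assumption that errors occurred at the observed rate around the clock, gives a figure of the same order of magnitude:

```python
# Back-of-the-envelope check on the reporting rate, under the crude
# assumption that errors occurred at the observed rate around the clock.
observed_errors = 55
observed_hours = 9
reported_in_two_years = 48

rate_per_hour = observed_errors / observed_hours        # ~6.1 errors/hour
hours_in_two_years = 2 * 365 * 24                       # 17,520 hours
expected_errors = rate_per_hour * hours_in_two_years    # ~107,000 errors

print(f"Reporting rate: {reported_in_two_years / expected_errors:.2%}")  # ~0.04%
```

The study's own figure will depend on how activity levels were adjusted for; the point is the order of magnitude: only around one error in a thousand gets reported.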

While incident reporting systems might be one source of information on poor design or use of interactive devices in healthcare, this is not a reliable route for identifying instances of poor design. Once an incident is reported it is important that the role of device design in contributing to the incident be properly considered, and not simply dismissed with the common response that the device “performed as designed” and that it was therefore a user error.

Friday, 7 April 2017

If the user can’t use it, it doesn’t work: focusing on buying and selling


"If the user can’t use it, it doesn’t work": This phrase, from Susan Dray, was originally addressed at system developers. It presupposes good understanding of who the intended users are and what their capabilities are. But the same applies in sales and procurement.

In hospital (and similar) contexts, this means that procurement processes need to take account of who the intended users of any new technology are. E.g., who are the intended users of new, wireless integrated glucometers or of new infusion pumps that need to have drug libraries installed, maintained... and also be used during routine clinical care? What training will they need? How will the new devices fit into (or disrupt) their workflow? Etc. If any of the intended users can’t use it then the technology doesn’t work.

I have just encountered an analogous situation with some friends, who are managing multiple clinical conditions (including Alzheimer’s, depression, the after-effects of a mini-stroke, and type II diabetes) but are nevertheless living life to the full and coping admirably. Recently, though, they were sold a sophisticated “Agility 3” alarm system, comprising a box on the wall with multiple buttons and alerts, a wearable “personal attack alarm”, and two handheld controllers (as well as PIR sensors, a smoke alarm and more). They were persuaded that this would address all their personal safety and home security needs. I don’t know whether the salesperson referred directly or obliquely to any potential physical vulnerability. But actually their main vulnerability was that they no longer have the mental capacity to assess the claims of the salesperson, let alone the capacity to use any technology more sophisticated than an on/off switch. If the user can’t use it, it doesn’t work. By this definition, this alarm system doesn’t work. Caveat emptor, but selling a product that is meant to protect people when the net effect is to further expose their vulnerability is crass mis-selling. How ironic!

Wednesday, 15 March 2017

Safer Healthcare



I've just finished reading Safer Healthcare. For me, the main take-home message is the different kinds of safety that pertain to different situations. Vincent and Amalberti describe three different approaches to safety:
  • ultra-safe, avoiding risk, amenable to standardised practices and checklists. This applies to the areas of healthcare where it is possible to define (and follow) standardised procedures.
  • high-reliability, managing risks, which I understand as corresponding to "resilient" or "safety II" – empowering people within the system to learn and adapt. This seems to apply to a lot of healthcare, where the variabilities can't be eliminated, but can be managed.
  • ultra-adaptive, embracing risk. This relies on the skills and resilience of individuals. This applies to innovative techniques (the very first heart transplant, for example) where it really isn't possible to plan fully ahead of time because so much is unknown and it relies on the skills of the individual.
The authors draw on the example of rock climbing. The safest forms of climbing (with a top-rope, which really does minimise the chances of hitting the ground from a fall) are in the first category; most climbing falls into the second: we manage risk by carefully following best practice while accepting that there are inherent risks; people more adventurous than me (and more skilled) push the boundaries of what is possible – both for themselves and for the community. But it is also possible to compromise safety, as graphically described by James McHaffie addressing Eve Lancashire, whose attitude to safety worries him (see about halfway through the post).

Vincent and Amalberti's categorisation highlights why comparing healthcare with aviation in terms of safety is of limited value: commercial aviation is, in their terms, ultra-safe, with standardised procedures and many barriers to risk; healthcare involves far too much variability for all of it to be amenable to such an approach.

Another point Vincent and Amalberti make is that incidents / harm very often don't happen within one episode of care, but evolve over time. I am reminded of a similar point made in a very different context by Brown and Duguid, who described the way that photocopier engineers learn about their work (and the variability across machines and situations): they describe it as being like the "passage of the sun across the sky" – i.e., it's not really clear when it starts or ends, or even exactly how it develops moment to moment. So many activities – and incidents – don't have a clear start and end. Possibly the main thing that distinguishes a reportable incident is that there is a point at which someone realises that something has gone wrong...

Sunday, 12 March 2017

Public health – personal health



I've just re-read the Academy of Medical Sciences report "Improving the health of the public by 2040". It makes many insightful points, particularly about the need for multidisciplinary training to deliver future professionals who can work across disciplinary silos – whether within healthcare and medical disciplines or with other disciplines such as computing and other branches of engineering. It also notes the likely importance of digital tools and "big data" in the future. It does, however, focus entirely on the population, apparently ignoring the fact that the population is made up of individuals, who each control their own health – at least to the extent that they can choose whether to comply with (or adhere to) medical advice and whether or not to share data about themselves. The report misses a big opportunity by not linking the individual to the population: the health outcomes and practices of the population emerge from the individual behaviours of each person. Sure, the behaviours of individuals are shaped by population-level factors, but they aren't determined by them. It's surely time to link the individual and the population better.


This can be compared with the Wachter Review, which focused on the value of electronic health records and other digital technologies for delivering safer and more effective care. That review also highlighted the need for professionals with skills that cross information technologies and clinical expertise, but it also considered issues such as engagement and usability. It notes that "implementing health IT is one of the most complex adaptive changes in the history of healthcare". Without addressing that complexity (which is a consequence of the number of individuals, roles, organisations and cultures involved), it's going to be difficult to achieve population-level improvements – by 2040, or at any time.