In January, EI Netherlands Branch member Arend van Campen posted a thought-provoking piece in response to the publication of EI Guidance on quantified human reliability analysis (QHRA) (see “Can humans be quantified?”).
Jamie Henderson, author of Guidance on quantified human reliability analysis (QHRA), has written a response clarifying the purpose of the guide:
“Arend van Campen’s response raises some important points – issues for which we have a great deal of sympathy. One of the reasons for writing the guidance was that human reliability analyses (HRAs) are often undertaken without a proper understanding of the context in which people work, and of the limitations of the available techniques and data. There are several potential dangers in this, not least complacency that human factors issues are being adequately managed when they are not.
Thirty years ago, when these techniques (e.g. THERP, HEART) were first being developed, the prevailing approach to engineering risk analysis was, and to some extent still is, primarily deterministic. When people were considered at all, it would typically be as components in a system – an approach that reduces a person “…to the same level as if he were a valve or pump that can be tested on reliability and sent back to the manufacturer if it does not work properly”. For many reasons, some of which are set out in Arend’s response, this approach is now seen as lacking. However, at the time, despite an increasing recognition of the role that people play in ensuring safety, human factors was still finding its feet as a discipline and needed systematic ways of ensuring that human factors issues were considered. Understandably, these attempts focused on developing tools and techniques that could be integrated with existing approaches to engineering risk management.
In the intervening years, individuals working in this area have made many criticisms of these techniques (for example, the basic concept of human error has been challenged by numerous authors) and have sought to develop new approaches to understanding why systems succeed or fail. One well-known example is research into high reliability organisations (HROs), which focuses not on failure but on identifying the characteristics of organisations that appear to manage safety particularly well in high-hazard environments. Resilience engineering, another relatively recent development, seeks to create processes that remain flexible and robust in the face of real-world complexity (e.g. responding to resource issues, revising risk models as situations change). The issues raised in Arend van Campen’s response (e.g. HSE goals, trust, motivation) are also important factors in an organisation’s ability to manage safety and risk.
However, despite the known issues with their application, and the development of new approaches to understanding why systems succeed or fail, these HRA techniques are still used, often by people without a background in human factors, and without a realistic understanding of the operating context in which tasks are performed. Until new approaches are developed, operationalised and tested, the existing techniques, which are after all designed to work within the context of an engineering risk analysis, will continue to be used. Therefore, the aim of the guidance, and the supporting article, was not to endorse these techniques, but to ensure that anyone considering using them, in particular individuals without a background in human factors, understands their limitations.”
We thank both Arend and Jamie for their contributions.
Perhaps something to add is to consider not just how QHRA should be done (if it is done at all), but why it is done.
Is QHRA being used simply to justify the safety measures we have put in place? Or is it being used to better understand the tasks people are expected to perform, in order to improve the measures in place?
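For readers unfamiliar with the techniques under discussion, a minimal sketch of a HEART-style calculation may help show the kind of quantification being debated. HEART multiplies a nominal human error probability for a generic task type by factors derived from error-producing conditions (EPCs). All numbers below are hypothetical and purely illustrative; they are not drawn from the EI guidance or any real assessment.

```python
# Illustrative HEART-style calculation (hypothetical values only).
# Assessed HEP = nominal HEP x product over EPCs of ((multiplier - 1) * proportion + 1)

nominal_hep = 0.003  # hypothetical nominal error probability for a generic task type

# Hypothetical error-producing conditions:
# (maximum multiplier, assessed proportion of affect, description)
epcs = [
    (11.0, 0.4, "shortage of time"),
    (4.0, 0.2, "poor feedback from the system"),
]

assessed_hep = nominal_hep
for multiplier, proportion, description in epcs:
    # Each EPC scales the nominal probability by a weighted factor
    factor = (multiplier - 1.0) * proportion + 1.0
    assessed_hep *= factor
    print(f"{description}: factor {factor:.2f}")

print(f"Assessed human error probability: {assessed_hep:.4f}")
```

The point the exchange above makes is that such a figure is only as good as the analyst's understanding of the operating context – the arithmetic is simple, but the choice of task type, EPCs and proportions is a judgement that demands human factors expertise.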