The November 2012 edition of Petroleum Review featured an article on the EI guidance on quantified human reliability analysis (QHRA) (both the Petroleum Review article and the publication are available for download from the link).
QHRA can be a contentious topic, and the article prompted a response from EI Netherlands branch committee member Arend van Campen. Arend is a business ethicist and writes:
“When I read the article ‘Quantifying human reliability in risk assessments’, I was flabbergasted. In the article ‘man’ (you-me) was declared ‘unreliable’, meaning: undependable, untrustworthy, irresponsible, etc.
QHRA reduces people to a quantifiable risk statistic, as if people were machines. The philosophical question ‘are people and their conduct indeed quantifiable?’ came to mind, and a second, ethical question with it: would morality, or virtuous conduct, become obsolete if we could quantify people’s reliability?
In a series of abbreviations and statistical charts, people seem to be reduced to unreliable entities, and we had better know these faults before we employ someone or have him or her operate expensive machines or installations!
But to understand why, and for whom, this article and the publication it discusses are intended, some initial questions need to be answered: who is at risk?
- Is it the corporation and its management that are at risk?
- Is this programme intended to control and minimise liability exposure, or to prevent reputation damage?
- Is the checklist designed to make the person whose reliability was analysed safer?
- Can one expect that by creating checklists on human reliability, human behaviour can be controlled and risks reduced?
- Does the checklist allow for ethical, moral and social input?
At the Oil Terminal 2012 conference I gave a master class on HSE and operational profit. It was not aimed at technical aspects such as wearing a hard hat or steel toe boots, but at the need to revise our thinking about safety by asking the correct questions. These questions were not found in the new EI publication, as they are founded on corporate social responsibility policies and codes of ethics. The lessons were aimed at human safety, happiness, trustworthiness and joy, because these human factors are unfortunately overlooked, yet they form the fundamentals of human reliability. Only when we address these simple human needs can we reduce people’s unreliability; we cannot do so by reducing management, personnel, shareholders or stakeholders to mere statistical and unreliable risk factors.
- What are the true HSE goals of the corporation and of the individual?
- Can a corporation and its management be trusted?
- Measure human motivation – by investing in concern and compassion.
- Look at social cohesion – at home and at work.
- Prime basic principle: trustworthiness.
- Golden rule principle: Do not do to others what you would not like others to do to you.
- If it is not true, don’t say it. If it is not right, don’t do it.
These simple points and questions would be a good start; in a second stage, the following questions are to be asked:
- Is humanity itself the product of rules and regulations?
- Does everyone realise that rules or actions must be based on the universal law that humanity is the product and cannot be the means?
- The most important rhetorical question: would you like to live in a world where everyone followed wrong actions?
- Is everyone aware that the principle of human action is free will, which gives it moral value?
- Are guidelines and regulations based on good will?
- Are they based on duty, or on inclination? (deontology)
- Is everyone aware that moral action becomes an objective necessity, to be obeyed as a duty that comes before the legality of an action?
When addressing safety, a person should not be reduced to the level of a valve or a pump that can be tested for reliability and sent back to the manufacturer if it does not work properly.
Trust, justice, respect, enjoyment and happiness are the key factors, but they are unfortunately often overlooked by technicians who believe in technical control – including control of people – through mechanistic tools based on physics, mathematics, economics and statistics. This is an error, but it can be balanced by allowing philosophy, in the form of ethics and logic, back into industry.
Just start with the first question before all else: ‘is it good or bad?’”
We thank Arend for his contribution to this debate.
What do you think? Is QHRA a useful technique for understanding risk, or is it fundamentally flawed?