Dictionaries define ergonomics as a scientific discipline that applies principles of biotechnology and engineering to make products more comfortable and user-friendly. But ergonomics isn't just about design; it also factors in how we use things. This article discusses Functional Basis – the ergonomics pillar of mega-project engineering.
Functional Basis (FB) complements the Design Basis I discussed previously.
Project engineering prefers to keep customers unaware of the very existence of FB, since it may add to the project budget an amount comparable to that of the Design Basis.
Three factors kindle interest in FB.
First is the Industry 4.0 focus on digitization, automation, and the industrial internet of things, which cannot be described without using the word "functionality."
Second is the growing complexity of mega-projects, spurred by technological breakthroughs and by the rising number of stakeholders (in PPP projects) brought in to mitigate the risk of failure. (We use the word "complexity" to denote the gap between the expertise needed to design, operate, and communicate and the expertise actually available.) The human ability to grasp complex projects and systems has its limits.
Third is the report of the European Agency for Safety and Health at Work (EU-OSHA) classifying widespread poor HMI design as a serious emerging risk (mainly due to increased levels of mental strain and stress).
In a nutshell, FB is about human-system interaction (HSI). It includes the Human-Machine Interface (HMI) – the part of the plant control system through which personnel monitor plant operation, perform routine procedural tasks, and intervene if something goes awry.
The HSI scope may be defined by the following questions.
- Are the plant functionality and operation modes documented?
- Does the plant systems selection match the requested functionality?
- Is the plant operation automated and remotely controlled?
- What is each system's level of automation?
- Do the systems have identical levels of reliability and redundancy?
- Does the plant operation require human involvement?
- Are the requirements for HSI documented?
- Are the operator tasks (in terms of cognitive activities) documented?
- Are there procedures for post-task analysis and modifications tracking?
- Does HSI address situational awareness?
- Does HSI detail risk-important human actions?
- Does HSI hide the systems' complexity? (How friendly is the HSI?)
- Does HSI address human and system limitations?
- Does HSI address possible hidden hazards?
- Does this interface match Human Factors Engineering standards?
- Does HSI support learning how to run the system?
The risk-important actions are those with the greatest risk contribution compared with all other risk contributors. These actions target plant safety; performance efficiency and product quality are secondary.
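The idea of ranking actions by their share of the total risk can be sketched in a few lines. Everything below – the action names, the contribution figures, the total risk – is a hypothetical illustration, not data from any real plant's risk analysis; the ratio computed is a Fussell-Vesely-style importance measure.

```python
# Sketch: rank human actions by fractional contribution to total risk.
# All names and numbers are hypothetical, for illustration only.

def risk_importance(actions, total_risk):
    """Rank actions by their fraction of total risk (a
    Fussell-Vesely-style importance measure), highest first."""
    return sorted(
        ((name, contribution / total_risk)
         for name, contribution in actions.items()),
        key=lambda item: item[1],
        reverse=True,
    )

# Hypothetical risk contributions (e.g. per-year frequency of scenarios
# in which failure of the action appears).
actions = {
    "manual isolation of HP pump": 3.0e-4,
    "alarm acknowledgement during startup": 1.2e-4,
    "local valve lineup check": 2.0e-5,
}
total_risk = 5.0e-4  # hypothetical plant total

for name, fv in risk_importance(actions, total_risk):
    print(f"{name}: importance = {fv:.2f}")
```

Actions at the top of such a ranking are the "risk-important" ones; the rest can be treated as routine in HSI design.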
Situational awareness concerns the gap between the operator's understanding of the plant's condition and its actual condition as represented by the HMI.
HSI design is driven by a top-down approach: the same 16 questions shall be applied to each and every subsystem of the plant. The number of systems in a mega-project varies between 20 and 40. Just imagine the amount of work to be done.
We may say that HSI is centered on the subsystem. Since a subsystem has a single function described by a verb (to pump, to chlorinate, to filter, etc.), it is often called a functional module.
In other words, to establish the best possible interaction between machine and operator, we need to enforce a modular design approach. It should also be common ground for the Control Design Philosophy. (Today that document is about technicalities, not operator-centered.) As this is not current practice, smart HSI is still a dream to come true.
The first step for any contractor in building a Human-System Interface is to decide whether automation is a product. The prevailing answer is no. Even in big companies, instrumentation and control engineers are often not on staff. Naturally, negotiating the plant's automation level is a not-talked-about topic when striking the deal.
In the non-digital past, project engineering rested on the assumption that there is a final, cost-effective level of automation; in other words, semi-automation was here to stay. That assumption is gone – a low function-allocation "ratio" between human and system is the best product today. So it is better to avoid questions like these when discussing automation with customers:
- Is automation preferred as a result of ……?
- Is automation necessary due to operator limitations?
- Is automation technically feasible?
- Is automation cost effective?
- Is manual operation preferable due to …?
The next step is to build the framework for describing and analyzing the plant/module control system capabilities and limitations.
- Is the module function critical?
- When is the module function needed?
- What is the module availability?
- Is there need for control redundancy?
- What are the module HMI indicators?
- What is the plant tolerance for module-less operation?
- Are all the module's modes of operation (automatic, shared, manual) equally safe?
- How resilient is the module to human errors?
- Does the module fail safe?
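One way to keep the answers to these questions auditable, module by module, is to capture them in a structured checklist. The sketch below is my own assumption of what such a record could look like, not an industry schema; the field names and review rules are illustrative only.

```python
# Sketch: the module control framework captured as a structured record,
# so answers can be collected and reviewed per functional module.
# Field names and review rules are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ModuleControlProfile:
    name: str                        # e.g. "chlorination", "ultrafiltration"
    function_critical: bool
    needed_when: str                 # modes in which the function is required
    availability_target: float       # e.g. 0.98
    control_redundancy: bool
    hmi_indicators: list = field(default_factory=list)
    tolerance_without_module: str = ""   # plant tolerance for module-less operation
    modes_equally_safe: bool = False     # automatic / shared / manual
    fails_safe: bool = False

    def open_issues(self):
        """Return unanswered or unsafe items for design review."""
        issues = []
        if self.function_critical and not self.control_redundancy:
            issues.append("critical function without control redundancy")
        if not self.modes_equally_safe:
            issues.append("operation modes not equally safe")
        if not self.fails_safe:
            issues.append("module does not fail safe")
        return issues

uf = ModuleControlProfile(
    name="ultrafiltration",
    function_critical=True,
    needed_when="all production modes",
    availability_target=0.98,
    control_redundancy=False,
)
print(uf.open_issues())
```

Multiplied by 20–40 modules, even this simple record makes the gaps in the control design visible at a glance.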
The customer is entitled to ask whether the control system covers routine tests of alarms and interlocks, and integrity tests of pressurized equipment such as RO pressure vessels or ultrafiltration systems. Are alarm priority, urgency of response, and allowable latency documented? Are the OEM instructions on safe operating limits acknowledged and built into the control system? Most probably not.
The answers may quickly reveal "silos" and "black holes" in the automation design – disruptors of coherent information streams. Manually collected information patches distort the operator's perception of the system's current status and lead to errors.
A classic example of a silo is a third party's subsystem equipped with a local control panel. Such systems increase the chance of human error because they fall outside the dominant control design standards.
Step three is to move to an operator-centered philosophy (instead of a control-centered one).
What do we expect from the operator? To cover for the design deficiencies? Or to provide priceless feedback on operating experience, with the operator's role made logical, coherent, and meaningful?
Modern control design leaves plenty of room for the operator's experience to grow.
Let's look at the operator's response to an alarm signaling serious, potentially hazardous deviations in process conditions. By the book, no response is needed, since alarm systems are not safety-related. Is that so?
My example is the 6 MW high-pressure pump used in reverse osmosis desalination megaprojects. This piece of equipment, with a price approaching US$1 million, has nearly 30 sensors measuring vibration and temperature at different locations. A signal from any sensor may trigger an alarm (High signal) or a safety interlock (High-High signal) that shuts down the pump. The "plenty of room" mentioned above starts when the operator gets two High signals from different locations. What if he or she gets three High signals? The dilemma is:
When does H + H = HH?
The search for an answer invariably steers us away from the trivial discrete alarm/interlock model toward a new one – a merger of safety control and predictive maintenance, with the operator as gatekeeper.
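A minimal sketch of what such a merger could look like: instead of evaluating each sensor's High/High-High thresholds in isolation, concurrent High signals from different locations are fused into a single escalation decision. The two-concurrent-Highs trip threshold, the sensor names, and the action labels are my illustrative assumptions, not a vendor rule.

```python
# Sketch: fusing discrete alarm states into an escalation decision.
# The concurrent-High trip threshold is an illustrative assumption.

HIGH, HIGH_HIGH = "H", "HH"

def escalate(signals, concurrent_high_limit=2):
    """Decide the pump action from a dict of sensor -> alarm state.

    Any single High-High trips the interlock, as in the classic model;
    multiple concurrent Highs from different locations also escalate to
    a trip, on the theory that a spreading pattern of Highs indicates a
    real mechanical fault. A lone High feeds predictive maintenance.
    """
    if any(state == HIGH_HIGH for state in signals.values()):
        return "trip"                      # classic safety interlock
    highs = [s for s, state in signals.items() if state == HIGH]
    if len(highs) >= concurrent_high_limit:
        return "trip"                      # fused escalation: H + H = HH
    if highs:
        return "alert-maintenance"         # predictive-maintenance lead
    return "normal"

print(escalate({"DE-bearing-vibration": "H", "NDE-bearing-temp": "H"}))  # prints "trip"
```

The interesting design question is precisely the one the article poses: who owns the `concurrent_high_limit` – the vendor, the control engineer, or the operator as gatekeeper?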
The current approach to operator training is based on the premise that the operator shall assume full control if the control system fails. In reality, 78% of system failures are due to human error (EU-OSHA report); in power generation the figure is 70%. The other reason a control take-over is impossible in emergencies is the Process Time, a notion introduced to describe the process dynamics. For filtration, the process time is measured in minutes; for the RO system, in seconds. Enough to make an error.
After all, to err is human. This inherent trait is a good reason to start training not from process basics but from HRA – human reliability analysis, which aims at quantifying the likelihood of human error for any foreseeable task. Examples are loss of signal, out-of-range values, loss of communications, power failure, instrument air failure, etc.
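The flavor of HRA quantification can be shown with a toy calculation in the style of THERP/SPAR-H-type methods: a nominal human error probability (HEP) adjusted by performance shaping factor (PSF) multipliers. All the numbers below are hypothetical illustrations, not values from any published HRA table.

```python
# Sketch: HRA-style quantification of a human error probability.
# Nominal HEP and PSF multipliers are hypothetical illustrations.

def adjusted_hep(nominal_hep, psfs, cap=1.0):
    """Multiply a nominal HEP by performance shaping factor
    multipliers, capping the result at a probability of 1.0."""
    hep = nominal_hep
    for multiplier in psfs.values():
        hep *= multiplier
    return min(hep, cap)

# Hypothetical task: operator response to loss of instrument air.
psfs = {
    "available time (barely adequate)": 10.0,
    "stress (high)": 2.0,
    "training (adequate)": 1.0,
}
print(adjusted_hep(0.001, psfs))  # 0.001 * 10 * 2 * 1 = 0.02
```

Even a toy model like this makes the training argument concrete: time pressure and stress, not ignorance of process basics, dominate the error likelihood.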
On the system side, any alarm should be followed by a clear response procedure stored in the HMI software. Have you ever seen such procedures? I have not. At best, the operator gets a short message – a bookmark into what he or she learned in training. Such procedures are normally left out of scope. The cornerstone problem is that cost estimators and schedule planners deal with hardware – equipment, piping, foundations, etc. Budgeting software and database engineering is beyond their understanding.
Both sides of HSI are imperfect, but their interaction generates data of great value – the basis for future "self-driving" and self-learning systems. So my final question is:
How do we make these data shareable?