Probably the first reaction of an inquisitive reader to this title is that it is a mistake. By definition, Data Acquisition (DAQ) is the process of digitizing data from the world around us so it can be analyzed and stored in a computer. DAQ converts signals into digital numeric values through a chain of sensors, signal conditioning, and analog-to-digital converters.

What is described above is the HOW part. The WHAT and WHY parts - the foundation of the future DAQ - are entirely missing. WHAT data should DAQ collect, and WHY should it be collected? These questions must be answered to tame the uncontrollable big-data "eruptions" of future mega-plants. (The eruption is already underway, judging by the O&M reports I receive.)

WHY is the hardest part. Try to answer the following question. A 6 MW pump is equipped with 17 sensors (out of a full complement of 29) measuring vibrations and temperatures. How many sensors should a 1.5 MW pump of similar design have? Should the designer's logic be scaled down in the same proportion?

So this title is not about control system hardware. To explain its objective, let's start with a trivial example.

To maintain the operation of a chemical dosing system, we need to know the fluid volume left in the storage tank - not the fluid level. DAQ should automatically transform level into volume, without manual programming by the control engineer.

To do this, our DAQ should have a function operating on dynamic data generated in real time and on static data describing the physical asset. The function's output is derived data.
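The level-to-volume example can be sketched in a few lines. This is a minimal illustration, not the author's implementation; the names and the simple cylindrical geometry are my assumptions.

```python
import math
from dataclasses import dataclass

# Sketch of a transfer function: dynamic data (measured level) combined
# with static data (tank geometry) yields derived data (volume left).

@dataclass
class TankSpec:
    """Static data describing the physical asset."""
    diameter_m: float  # inner diameter of a vertical cylindrical tank

    @property
    def floor_area_m2(self) -> float:
        return math.pi * (self.diameter_m / 2) ** 2

def volume_left(level_m: float, tank: TankSpec) -> float:
    """Transfer function: dynamic level reading -> derived volume, m3."""
    return level_m * tank.floor_area_m2

tank = TankSpec(diameter_m=2.0)          # static data, fixed at sizing time
print(round(volume_left(1.5, tank), 3))  # a 1.5 m level -> ~4.712 m3
```

The point is the separation of concerns: the level transmitter supplies only the dynamic reading, the asset record supplies only the geometry, and the transfer function joins them without any hand-written SCADA code.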

This transfer function (TF), introduced by PlantDesigner, is the cornerstone abstraction of the future ADAQ - any data acquisition system receiving, labelling, sorting, transforming, and broadcasting data to various recipients.

As derived data may be easily reconstructed, recipients should have access not only to dynamic process data and static data but to the TF as well. So a TF should behave like data: it should have a GUID - a globally unique identifier. The GUID is a codified address; without it, both the data and the ADAQ are useless.
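The idea of "TF behaves like data" can be sketched as a registry keyed by globally unique identifiers. All names here are illustrative assumptions, not a real API: the point is only that a recipient can fetch the function itself, alongside the raw inputs, and reconstruct the derived data.

```python
import uuid
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical sketch: each TF carries a globally unique identifier,
# so it can be addressed and shipped around exactly like data.

@dataclass
class TransferFunction:
    name: str
    func: Callable[..., float]
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

registry: Dict[str, TransferFunction] = {}  # GUID -> TF: the "codified address"

def publish(tf: TransferFunction) -> str:
    """Make the TF addressable by its GUID."""
    registry[tf.guid] = tf
    return tf.guid

gid = publish(TransferFunction("level_to_volume",
                               lambda level_m, area_m2: level_m * area_m2))

# A recipient reconstructs the derived value from raw data plus the TF:
print(registry[gid].func(1.5, 3.0))  # -> 4.5
```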

Can a control engineer build a GUID-based ADAQ? No chance: the static data, which the control engineer does not own, outnumbers the dynamic data sources by a factor of roughly 100 (!).

The metrics obtained with PlantDesigner show that, for a small mega-plant of 100 MLD capacity, the ADAQ should serve about 300 instruments and have access to nearly 30,000 (!) static parameters and over 550 TFs.

The unfavorable ratio between dynamic and static data has serious implications for AI's future in plant operation and predictive maintenance. To read dynamic data correctly, AI direly needs the static data that forms the context of the problem. The same reasoning applies to the asset's O&M digital twin. In other words,

Neither AI nor O&M digital twin will succeed in process engineering unless they are rooted in ADAQ.

It is the design/process engineer who is closest to building a TF-driven ADAQ, even though he or she hardly knows anything about signal conditioning and analog-to-digital converters.

TF is the core of the language for describing how the plant should be protected, controlled, and monitored - knowledge owned by the process engineer.

This knowledge never crosses the interface between the process engineer and the control engineer in a logical and consistent way. It is requested by neither SCADA nor DCS. Poor control design accounts for 59% (!) of all accidents in the process industries ("Out of Control", UK HSE). How can AI advance in a chaotically controlled environment?

Basically, a TF is a mathematical function and a common wrapper around raw instrument signals and asset properties. Naturally, a TF absorbs all the standard signal-conditioning functions of a PID controller, such as totalizing, reversing, and log. As a mediator between process and control, a TF aims to be easily converted into SCADA instructions.
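The conditioning primitives mentioned above can be written as small composable wrappers. The function names below are my own illustrative choices, not a vendor API.

```python
import math

# Illustrative wrappers for the standard conditioning primitives the
# text lists: reversing, log scaling, and totalizing.

def reverse(lo: float, hi: float):
    """Invert a signal within its calibrated range: hi maps to lo."""
    return lambda x: hi + lo - x

def log_scale(base: float = 10.0):
    """Logarithmic conditioning, e.g. for wide-range signals."""
    return lambda x: math.log(x, base)

class Totalizer:
    """Accumulate a rate signal into a running total (rate * dt)."""
    def __init__(self) -> None:
        self.total = 0.0

    def __call__(self, rate: float, dt_s: float) -> float:
        self.total += rate * dt_s
        return self.total

tot = Totalizer()
for rate in (2.0, 2.0, 3.0):      # flow samples, one per second
    tot(rate, dt_s=1.0)
print(tot.total)                  # 7.0
print(reverse(4.0, 20.0)(4.0))    # 20.0 (a 4-20 mA span, reversed)
```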

A fundamental property of a TF is its context link to instruments and assets - the data sources and consumers. A TF may therefore be considered a gatekeeper for information exchange. This role explains the word "transfer" in TF.

TF is ubiquitous. It is used for plant design validation, control loop and alarm definition, equipment performance rating, data visibility and priority definition, and data stream destinations.

Can the process engineer compile 550 TFs manually, with all their GUIDs and tags, using Excel or Access? The answer is negative. To overcome this challenge, PlantDesigner comes with a framework for automatic TF generation. For example, it auto-generates the volume-left TF for the above-mentioned chemical dosing system once the P&ID sizing is completed. (TFs for centrifugal pumps and drives are a special topic that will be discussed elsewhere.) The framework uses rule-driven algorithms and business process management; currently, they handle roughly 90% of the required TFs. What about the remaining 10%? PlantDesigner comes with a user interface for data acquisition design and management. It complies with the following requirements.

  1. Identify meters, valves, motors, and variable speed drives to be plugged into the plant ADAQ based on their specifications
  2. Categorize and visualize the usage of the said device signals - protection, control, or monitoring - for quick control design validation
  3. Edit and validate the device data destinations - database, HMI display, messaging, web, O&M report
  4. Compile templates for ADAQ reporting like "ADAQ formulas" with engineering hyperlinks. (When the link is clicked the P&ID fragment pops up with the device marked up)
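Requirement 2 above can be illustrated with a tiny data model. The field names and tag conventions below are assumptions for illustration, not the actual PlantDesigner schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: tag each device signal as protection, control,
# or monitoring, then group by usage for a quick control-design check.

class Usage(Enum):
    PROTECTION = "protection"
    CONTROL = "control"
    MONITORING = "monitoring"

@dataclass
class Signal:
    tag: str        # instrument tag, ISA-style
    device: str     # parent asset
    usage: Usage    # how the control design consumes the signal

signals = [
    Signal("LSH-101", "storage tank", Usage.PROTECTION),
    Signal("LT-101",  "storage tank", Usage.CONTROL),
    Signal("TT-201",  "dosing pump",  Usage.MONITORING),
]

by_usage: dict = {}
for s in signals:
    by_usage.setdefault(s.usage, []).append(s.tag)

print(by_usage[Usage.PROTECTION])  # ['LSH-101']
```

A validation pass over such a structure can immediately flag, for instance, a pump with no protection signals at all.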

The next fundamental abstraction of ADAQ is Event. It is worth discussing in a separate article.

© 2024