|
|
|
Can you say, off the top of your head, what the temperature and pressure variables on the controls of your 24 machines with 5 different control types are called? Unless you happened to program these machines yourself two weeks ago, the answer is probably "No". This is how it goes for almost all the companies we talk to.
When programming a control, PLC programmers decide how the measurement and control variables are named. So you can't simply request "the throughput" - it remains hidden behind cryptic names like these, which we have actually found on controls:
Throughput_A
A2_ftpt_svktpt_a0
A_EXT_ACT_KGH
P_Netta_Estrusore_A_KgH
section[1].extruder[1].throughput.act
The fact that a control system can expose more than 100,000 variables turns the search for the few relevant ones into an exercise in patience. We first show where and why things get stuck, and then explain how a harmonized data foundation ensures this work needs to be done only once and never again.
Overview of the Blog Series on Connectivity & Machine Data:
Digitalization Dilemma: Working for the Data or Working with the Data
From Euromap, Data Blocks and Harmonized Data
Ready for All Challenges in Production with ENLYZE and Grafana
Lesson #3: Finding the Right Data Points on the Machine: The Search for a Needle in a Haystack
In a perfect world, you plug a cable into the control and the desired temperatures, pressures, and speeds are recorded. Unfortunately, this is not the case, and the right values must first be found before they can be recorded.
🏷️ People think in pressures and throughputs - Controls think in “Data Blocks” and “Tags”
Each control consists of data points that control and monitor the production process: a variety of setpoints, actual values, limits, and indicators of a unit's state (On/Off). Text fields, such as the name of the material in use, also appear as data points. In the OPC world these are called Tags; on S7 controls, Data Blocks. We refer to all of them simply as variables.
The problem: There is no global standard for naming the variables that represent common process variables such as pressures, temperatures, or throughputs. PLC programmers decide this based on internal conventions. Concretely, this means: to read a temperature from a Siemens S7-1200, the value may be hidden behind DB20:1, DB40:3, or some other data block address.
The differences arise from the hardware used (Siemens S7, B&R, etc.), the protocol, and the design of the machine itself. Even with machines from the same manufacturer, one cannot be sure.
In fact, it is the rule rather than the exception that machines from the same manufacturer use different data blocks and tags for the same process variables when they come from different production years.
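To make the addressing problem concrete, here is a minimal sketch of what "the same temperature behind different data blocks" looks like at the byte level. The data block contents and offsets are invented for illustration; on an S7, numeric values are stored big-endian, and a REAL is a 4-byte IEEE 754 float.

```python
import struct

# Hypothetical raw data block contents as read from two different machines.
# The same physical temperature sits at DB20 offset 1 on one machine
# and at DB40 offset 3 on another.
db20 = bytes(1) + struct.pack(">f", 203.5) + bytes(11)
db40 = bytes(3) + struct.pack(">f", 203.5) + bytes(9)

def read_real(db: bytes, offset: int) -> float:
    """Decode a big-endian 32-bit REAL at the given byte offset."""
    return struct.unpack(">f", db[offset:offset + 4])[0]

# Identical value, different addresses - without documentation,
# nothing tells you which offset holds the temperature.
print(read_real(db20, 1))  # 203.5
print(read_real(db40, 3))  # 203.5
```

Without knowing the right data block and offset for each machine, the raw bytes are meaningless, which is exactly why the search described above is so tedious.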
🇮🇹 Parla Italiano?
In all our customer projects, a similar picture emerges: no company's machine park comes from a single manufacturer. In a global market, machines are acquired globally, and PLC programmers often choose variable names in their respective national languages. Thus, on machines from Germany, Italy, and England, you find German, Italian, and English variable names on the controls. The use of non-standard abbreviations completes the mix.
Viel Glück, Buona Fortuna, and Good Luck!
🗺️ What about Euromap and Co?
At trade fairs, Euromap is often presented as a buzzword and the ultimate solution to all these problems. Our experience, however, shows that the reality looks different.
The Euromap Standards are specifications developed for the plastics and rubber industry and exist either independently or as OPC UA Companion Specifications. Simply put, they define standardized names for variables that represent the same process variables in different systems.
This initially sounds promising but has a catch: the specifications cover only a very small set of variables, mainly relevant for simple applications such as transmitting aggregated values to an MES.
As soon as you leave this path, you quickly run into the limitations of the Euromap standards. A stacker and its process variables, for example, are not part of any specification.
Furthermore, we have learned the hard way: even if a machine is sold as, say, Euromap 77 compliant, this does not mean that all variables from the standard can be found on its OPC UA server. It only means that the variables that are available on the server (and only those defined in the standard!) follow the specified naming convention.
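In practice, checking compliance therefore means browsing the server and comparing what you find against what you expected. A minimal sketch of that comparison, using placeholder names (these are not actual Euromap 77 node names):

```python
# Placeholder variable names - illustrative only, NOT real Euromap nodes.
expected = {"MachineStatus", "CycleTime", "Throughput", "MeltTemperature"}

# What a browse of a "compliant" machine's OPC UA server might return:
found_on_server = {"MachineStatus", "CycleTime"}

# Compliance only guarantees the naming of what IS there,
# not that everything from the standard is present.
missing = sorted(expected - found_on_server)
print(missing)  # ['MeltTemperature', 'Throughput']
```

The gap between `expected` and `found_on_server` is exactly what surprises buyers of "standard-compliant" machines.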
Are you in a different industry and have had better experiences there? We would be very pleased to receive an email at hello@enlyze.com and have an interesting exchange!
🔍 A Data Source, Thousands of Variables
Until you have connected to a machine, you cannot say how many variables its control exposes. In the past five years, we have seen everything from 10 to 1,000,000 variables per data source, which makes the effort of finding the few relevant process variables hard to estimate up front.
The average data source on the ENLYZE platform exposes 13,607.55 potential variables, of which about 67 are actually recorded. Put plainly: you have to search through tens of thousands of variables to find the relevant 0.5%.
If this task has to be performed on the shop floor while 300 kg of molten plastic is shot upwards through a ring, it is not a pleasant one.
What makes it worse: it is rarely enough to do this for just one machine - every machine has to be tackled individually. Add the lack of standardized variable names, and we fully understand why companies shy away from this challenge, especially since this step alone creates no added value.
⛑️ Data Harmonization: Like Euromap, but for all Your Use Cases
On the road to Industry 4.0, manufacturers must make the scalability of use cases one of the main goals of their strategy.
To ensure that the work described above needs to be done only once and never again, an abstraction layer is required that hides the complexity of the individual machines and data sources.
This is achieved through Data Harmonization: an enterprise-wide data and information model maps machines, relevant process variables, and aggregations such as KPIs, energy consumption, and CO2 equivalents, and assigns each of them an identifier that can be used to query the data independently of the underlying machine.
A simplified information model is illustrated in the figure above: it defines the concept of a machine (identifier "machine") and two process variables associated with a machine, the haul-off speed (identifier "haul_off_speed_actual") and the throughput (identifier "throughput_actual").
Each machine is now created in the information model along with the two process variables, and the machines' variables are linked to the corresponding process variables in the model. In our example, the throughput of machine 1 is behind the tag "A_EXT_ACT_KGH" on data source 1, and the haul-off speed behind the tag "r_iAbzugsgeschwdgk". On machine 2, the throughput is behind the tag "P_Netta_Estrusore_A_KgH" on data source 2, and the haul-off speed behind "P_Dati_Motore_A_Mt_Minuto" on data source 3.
After this assignment, users and applications can query the haul-off speed of any machine using the identifier haul_off_speed_actual, without needing to know that it is hidden behind "r_iAbzugsgeschwdgk" on machine 1 and behind "P_Dati_Motore_A_Mt_Minuto" on machine 2.
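The mapping described above can be sketched as a simple lookup structure. This is our own minimal illustration, not a specific ENLYZE API; the data source names are invented, while the tags are the ones from the example.

```python
# Information model: harmonized identifier -> (data source, machine-specific tag).
# Structure and names are illustrative, not a real platform schema.
INFORMATION_MODEL = {
    "machine-1": {
        "throughput_actual":     ("datasource-1", "A_EXT_ACT_KGH"),
        "haul_off_speed_actual": ("datasource-1", "r_iAbzugsgeschwdgk"),
    },
    "machine-2": {
        "throughput_actual":     ("datasource-2", "P_Netta_Estrusore_A_KgH"),
        "haul_off_speed_actual": ("datasource-3", "P_Dati_Motore_A_Mt_Minuto"),
    },
}

def resolve(machine: str, identifier: str) -> tuple[str, str]:
    """Translate a harmonized identifier into the machine-specific address."""
    return INFORMATION_MODEL[machine][identifier]

# Applications ask for the identifier, never for the raw tag:
print(resolve("machine-1", "haul_off_speed_actual"))
print(resolve("machine-2", "haul_off_speed_actual"))
```

An application built against `haul_off_speed_actual` works unchanged on both machines; adding a third machine only means adding its two tag mappings to the model.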
Creating and linking an information model is no small effort, but the payoff is enormous: unlike current industry practice, the large integration effort is incurred only once, for the first use case. Edge devices are installed, controls are connected, the information model is built, and the relevant variables on the controls are identified and linked to the model.
Every subsequent use case can then be built on this platform with very little effort.
Process engineers no longer have to run to the machine every two hours with USB sticks to capture data from various machines for 5 minutes and then overlay the individual process variables in Excel. Applications that capture the Product Carbon Footprint can also be quickly tested on one machine and then rolled out across the entire machine park. Once the machines are digitized and represented in the information model, so-called economies of scale come into play, as we have seen in software companies over the past 30 years.
Conclusion
The lack of a global standard for naming variables at the control level leads to enormous workloads when one wants to capture certain variable sets in a machine park. Each time, one has to fight through thousands of variables with new cryptic naming schemes and look for the proverbial needle in the haystack.
So that this effort has to be made only once and never again, data harmonization is a central component of IIoT platforms. By creating and assigning a company-wide naming scheme, applications and services can use the same variable names across machines to access the same physical value, such as the extruder throughput or the nozzle speed. This reduces complexity and enables the rapid development of in-house applications and services.
At ENLYZE, we have been dealing with this topic for five years and had to solve these problems ourselves to roll out our application to production quickly. In the next article, we will take a closer look at today's topic and show how the ENLYZE IIoT platform lets you explore the variables on your machines from anywhere and establish a uniform naming scheme.
If you have any questions, feedback, or a contrary opinion in the meantime, feel free to contact us by email: hello@enlyze.com. We look forward to hearing from you.