The chance of success for digitization in production is unfortunately still quite low:
“80% of digitization projects in production fail.”
There is enormous potential in digitization, but the implementation often falters. Why is this so? What are the causes?
Over the past 5 years, we have been able to assist many companies with the digitization of their production and have seen the good, the bad and the ugly.
In doing so, we recognized a recurring pattern: today, huge projects are typically set up with long durations and large budgets, requirements documents are written, and everything is visualized in a polished digitization roadmap for the coming years.
This approach almost always ends with the hoped-for goals not being achieved and the planned project costs being exceeded. Your ERP or MES consultant will surely admit after the second beer that a large part of their revenue depends on exactly this fact.
The result: many are disappointed with what digitization can actually deliver.
But why is that? And what does an alternative strategy look like?
The main problem is the long upfront planning of projects using the waterfall method and the definition of concrete use cases in advance. The assumptions made here are inevitably and largely implicit and can hardly be validated beforehand. This, in turn, leads to changes and adjustments during the project as the assumptions collide with reality and break. Costs explode and project durations stretch.
The alternative is an iterative approach, where the overarching goal is clearly defined, but concrete implementations occur iteratively based on data, facts, and figures in the project. Essentially, this is nothing other than an agile development approach.
For the typically monolithic systems that are still being developed by industrial software providers today, however, this approach does not work. Instead of trying to solve everything in one system, one should rather think in an ecosystem where data can flow freely between applications.
This article clarifies what such an approach can look like and what needs to be considered.
Defining the Overarching Goal: The North Star
Everything starts with setting a clear goal, the so-called North Star, to which everything is oriented. Progress is continuously monitored using a North Star metric.
The most common goal of digitizing the shop floor and implementing IIoT solutions is to increase productivity and thus OEE:
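OEE = Availability × Performance × Quality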
From our experience, OEE is excellent as a North Star metric, as it encompasses all relevant areas of production (availability, performance, and quality). Changes in one of these areas directly affect the metric.
Therefore, we also set an increase in OEE as the North Star metric in our example. This means that the success of all digitization projects on the shop floor will now be measured based on improvements in OEE and the resulting cost savings.
The North Star defines the goal, and the North Star metric measures success and progress in numbers.
For our example:
North Star: Increase in productivity
North Star Metric: OEE
For greater visibility, we also recommend considering the changes in the individual areas, namely the availability, performance, and quality factors, as this makes trade-offs evident.
Let's look at this with an example: a lower rejection rate is achieved by cleaning the equipment more frequently. The quality factor increases, but due to the more frequent downtimes, availability simultaneously decreases. As a result, the OEE metric may not improve at all despite the measure.
From an economic perspective, however, it can still make sense to accept more frequent cleaning and thereby more downtime, as losses from rejections are significantly more expensive than availability losses. Only by capturing the individual factors do these effects become visible and can be weighed against each other.
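A quick back-of-the-envelope calculation with purely illustrative numbers makes this trade-off concrete:

```python
# Purely illustrative numbers: OEE before and after more frequent cleaning.
def oee(availability, performance, quality):
    return availability * performance * quality

before = oee(availability=0.90, performance=0.85, quality=0.95)  # ~72.7%
after = oee(availability=0.86, performance=0.85, quality=0.99)   # more downtime, less scrap: ~72.4%

print(f"OEE before: {before:.1%}, OEE after: {after:.1%}")
# The headline OEE is nearly flat, but the loss mix has shifted: scrap
# (often the most expensive loss) is down, availability is down. Only the
# individual factors make this trade-off visible and weighable.
```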
Capturing the Status Quo
Before digitization, the first step is to capture the status quo, that is, the current value of the North Star metric. Ideally, this metric, e.g., the OEE, is continuously and accurately captured and calculated based on machine data.
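As a minimal sketch of what this calculation can look like, here is the standard OEE computation from raw machine counters; all input values are placeholders for signals a real data connection would deliver:

```python
# Standard OEE calculation for one shift from raw machine counters.
# All values are placeholders for signals a real data connection would deliver.
planned_time_min = 480       # planned production time
downtime_min = 45            # stops within the planned production time
ideal_cycle_time_s = 2.0     # ideal time per piece
total_count = 11_500         # pieces produced
good_count = 11_040          # pieces without defects

runtime_min = planned_time_min - downtime_min
availability = runtime_min / planned_time_min                          # 90.6%
performance = (ideal_cycle_time_s * total_count) / (runtime_min * 60)  # 88.1%
quality = good_count / total_count                                     # 96.0%

print(f"OEE = {availability * performance * quality:.1%}")             # 76.7%
```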
To realize the data connection, an infrastructure needs to be created in this initial step that establishes connectivity to the necessary data points (southbound connectivity), enables secure transportation and storage of this data, and ensures that the necessary metrics can be calculated.
Building on this infrastructure, there is then the analysis layer that displays the metrics and enables analyses.
An additional important task of the platform is to distribute data to third-party systems (northbound connectivity) such as MES, ERP, or BI solutions, so that no further data silos are created. This later enables the necessary flexibility to solve all use cases with the same data infrastructure. At ENLYZE, data can be shared both directly on the shop floor via OPC UA and MQTT as well as in the cloud via APIs and integrations.
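To illustrate what such data sharing on the shop floor can look like, here is a minimal sketch that publishes calculated OEE values to an MQTT broker using the open-source paho-mqtt library; the broker address and topic name are assumptions, not ENLYZE specifics:

```python
import json
import paho.mqtt.publish as publish

# Hypothetical broker and topic; in practice these come from your shop-floor setup.
BROKER_HOST = "broker.shopfloor.local"
TOPIC = "site1/line3/extruder-1/oee"

# Any northbound consumer (MES, BI tool, custom dashboard) can subscribe to the topic.
payload = json.dumps({
    "availability": 0.91,
    "performance": 0.88,
    "quality": 0.96,
    "oee": 0.77,  # 0.91 * 0.88 * 0.96 ~= 0.77
})
publish.single(TOPIC, payload, hostname=BROKER_HOST, qos=1)
```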
After this initial step, OEE and productivity losses are accurately and systematically captured based on machine data. From now on, OEE can serve as a compass. The same applies to any other North Star metric.
It is worth noting that very few companies even reach this first step. Instead, they often work on many distributed, individual use cases without the overarching framework.
Deriving the First Use Case
The system for capturing the North Star metric is implemented. At this point, it is important to emphasize that it is especially up to leaders to communicate this guiding framework around the North Star metric again and again and to incorporate it into daily decisions and discussions. Only then does the framework become ingrained in colleagues and guide their thinking and actions. Now everyone is aware that digitization projects (in our example) are measured by the improvement of productivity and thus OEE.
The next step is to identify the first use case for increasing productivity. Here, the framework of the 6 Big Losses helps us locate the greatest losses and answer the following questions:
Which use case has the largest measurable leverage on the North Star metric? In our case, on OEE.
Which use case can be implemented relatively quickly and easily (low-hanging fruit)?
In this phase, it is essential to find out where the productivity losses are hidden and what their causes are. A new transparency for your production is created. One of our clients once said that this step is comparable to the moment you put on glasses for the first time: you realize how blurry and inaccurate your perception was before.
This new transparency is now used to identify the largest productivity losses and derive potential measures. Subsequently, based on an effort assessment, costs and benefits of the measures are compared. The measures are then prioritized, and the top use case or best measure is addressed.
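One simple way to make this prioritization explicit is to rank each candidate measure by its expected OEE gain per unit of effort. A minimal sketch with purely illustrative measures and numbers:

```python
# Purely illustrative effort/benefit comparison of candidate measures.
measures = [
    {"name": "Second tool cart to cut setup time", "oee_gain_pp": 2.5, "effort_days": 5},
    {"name": "Maintenance dashboard for top-3 fault codes", "oee_gain_pp": 1.8, "effort_days": 15},
    {"name": "Inline scrap monitoring with alarms", "oee_gain_pp": 3.0, "effort_days": 25},
]

# Rank by expected OEE gain (percentage points) per day of effort.
for m in sorted(measures, key=lambda m: m["oee_gain_pp"] / m["effort_days"], reverse=True):
    ratio = m["oee_gain_pp"] / m["effort_days"]
    print(f"{ratio:.2f} pp/day  {m['name']}")
```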
Important in this step: it is not only about implementing new systems, dashboards, or other digital products, but about truly focusing on the processes and understanding how they can be improved. There are cases where digital systems can unlock tremendous potential for more efficient processes. More often than not, however, the answer is simply acquiring another tool cart to reduce setup time.
The data only highlights the problem areas, but it is the changes in processes that lead to actual improvements:
“If you digitize a crappy process, you have a crappy digital process.”
Thorsten Dirks
As improvement measures become apparent only in this step, it is important to have chosen a flexible and data-permeable platform. If this is not the case, a parallel infrastructure must be built for each use case. Unfortunately, this is the norm today and one of the reasons for the high complexity and costs of digitization on the shop floor.
Why flexible? If the infrastructure is flexible, it suffices to add the new data points relevant to the use case.
Why data-permeable? Data permeability must be ensured to share the necessary data with the systems relevant to the use case.
If both conditions are met, it does not matter whether the problem lies with unplanned downtimes and the solution is a maintenance dashboard, or whether there are problems with idle time and the solution is inline monitoring with a live dashboard and alarm function for the workers. The infrastructure remains the same; only the application layer changes. The result is significantly lower costs and a much faster implementation.
This effect of rapid and inexpensive development and redevelopment of use cases is particularly important: studies show that there is no single use case that makes digitization successful. Rather, it is an individual combination of different use cases. The challenge is to identify the relevant ones, implement them quickly and easily, and thus realize real added value.
“No single IIoT use case is a silver bullet at scale, so broad execution (implementing multiple use cases and getting them to scale over time) matters more”
Source: McKinsey
Today's monolithic systems, which try to solve everything in one application and map all functionalities via one system, are not capable of doing so. This is why parallel infrastructures with high costs and high complexity arise.
One must rather think in an ecosystem where data can flow freely from one application to the next. The task of the platform is to enable this free flow of data and its adaptation. The infrastructure remains the same for all use cases and only adapts to new circumstances. The applications for individual use cases can be exchanged, adjusted, or developed in-house.
Measuring Progress
After the measures are implemented on the shop floor, the next question is: Is the approach successful? This means: progress must be continuously monitored. Are the hoped-for successes materializing? Am I seeing an improvement in OEE or in availability, performance, or quality? These questions and metrics should be reviewed weekly.
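As a sketch of what such a weekly review can look like, assuming the platform exports the history as a simple table with a timestamp and the three factors (file and column names are assumptions):

```python
import pandas as pd

# Assumed export: one row per shift with timestamp and the three OEE factors.
df = pd.read_csv("oee_history.csv", parse_dates=["timestamp"])
df["oee"] = df["availability"] * df["performance"] * df["quality"]

# Weekly averages make the trend visible at a glance.
weekly = df.resample("W", on="timestamp")[["availability", "performance", "quality", "oee"]].mean()
print(weekly.tail(8))                 # last eight weeks
print(weekly["oee"].diff().tail(1))   # week-over-week change: is the measure working?
```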
If the metrics are improving: very good. With the help of the data, success can be communicated on the shop floor and to management, thus maintaining motivation and willingness to change. If the use case is successful, it can now be rolled out across all plants and machines.
And if the metric does not budge an inch? Then the assumptions about the cause of the problem may have been incorrect. This is where the strength of the agile and iterative approach shines: thanks to the rapid feedback (Time to Insight), one can quickly correct course and re-examine the causes and the problem.
Ideally, as is the case with the ENLYZE platform, changes are possible without IT resources. This allows necessary adjustments to be made quickly and without high costs or coordination needs by production experts themselves.
Rinse & Repeat
The measure was successful, and the losses in that area have been reduced. Thanks to the continuous monitoring of the 6 Big Losses, it becomes apparent that the next largest loss has a different cause.
The process starts over. Identify the problem, conduct a cause analysis, derive measures, monitor whether the measures have an effect on the problem, adjust if not, and if successful, then move on to the next problem.
Many will recognize this process: the PDCA cycle, the process of continuous improvement. So this is not a new framework, but one that has proven itself in the industry for decades.
However, data availability and the resulting transparency act like steroids and help to complete the PDCA cycle faster than ever before. The wheel of continuous improvement spins faster.
Summary
Away from lengthy requirements documents and pre-planned digitization projects that miss your goals. Towards an iterative, agile approach built on a flexible data infrastructure, where the use cases with the highest benefit-to-effort ratios are identified using numbers, data, and facts. Away from a single use case, towards an individual combination of several use cases that each add value.
The 6 Big Losses and OEE as the North Star metric make progress measurable and feedback tangible. The question of whether one is on the right path can be answered quickly and at any time.
A flexible infrastructure allows adaptation to these constantly changing circumstances. Adjustments should be possible in-house to ensure speed and keep costs from escalating. Data permeability ensures that data is made available to other systems. Thus, the best solution can always be used for each use case, and one is not locked into any single system.
The ENLYZE Manufacturing Platform was developed based on these fundamental principles. Adjustments are possible at any time, and data can be shared in real time on the shop floor via its own OPC UA server and MQTT broker. Integrations into third-party systems like Grafana and PowerBI let you build your own solutions without development effort. The API and Python SDK give your data science team direct data access and make it possible to realize your own use cases with minimal development effort.
ENLYZE has combined decades-tested frameworks (Continuous Improvement, 6 Big Losses, OEE) from the industry with the latest technology and data to make your path to increased productivity as simple as possible.
If this sounds interesting to you, then book a demo!
Become an OEE expert with our OEE series
Here you will learn how to calculate OEE and improve it over the long term.