This article is a summary of a forthcoming white paper. Feel free to email us at firstname.lastname@example.org to receive it when it becomes available.
Over the last two years, we have met and spoken with more than 100 users, prospects, customers, and industrial decision-makers across all industries.
The subject? Connecting the shop floor and being able to work with production data in real time.
Reporting process and production data in real time to make data-driven decisions. A connected workshop means real time for monitoring, alerting and machine recalibration, production orchestration, predictive maintenance, detection of anomalies and scrap, and more.
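To make the anomaly-detection use case concrete, here is a minimal sketch of a real-time rule an edge service could run on a sensor stream: flag any reading that deviates from the rolling mean by more than a few standard deviations. The function name, window size, and sample trace are illustrative assumptions, not part of any specific platform.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate from the rolling mean by more than
    `threshold` standard deviations (a simple z-score rule)."""
    history = deque(maxlen=window)
    flags = []
    for value in readings:
        if len(history) >= 3:
            mu, sigma = mean(history), stdev(history)
            flags.append(sigma > 0 and abs(value - mu) > threshold * sigma)
        else:
            flags.append(False)  # not enough history yet to judge
        history.append(value)
    return flags

# A temperature trace with one obvious spike at index 5:
trace = [70.1, 70.3, 69.9, 70.2, 70.0, 95.0, 70.1, 70.2]
print(detect_anomalies(trace))
```

In production this logic would run continuously close to the machine, with the threshold tuned per sensor, but the principle stays this simple.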
In 90% of cases, the questions, attempted fixes, and chosen solutions are the same. So we decided to share some keys to help you move forward through the fog.
What you told us:
- Real time is essential.
- We don't know what's inside the machines because they are closed. We bought specialized machines, and everything is locked down by the supplier.
- My machines are not connected, and the return on investment is uncertain.
- Production networks are outside the scope of IS and therefore not included in transformation plans (though this is starting to change).
- We don't have any automation engineers here. It's maintenance, methods, or production that manages this. They don't know PLCs. We don't dare touch them.
- We have an MES; implementation took a year, the environment is completely closed, we can't extract anything, we spend our time on custom development, and users are not satisfied with the result.
- We built it with open-source bricks (Node-RED, for example), but it doesn't reproduce at scale and the robustness of the system is not guaranteed.
According to McKinsey, connecting the workshop and sending the right data to the right person in real time is a highly significant source of ROI:
- up to +90% productivity.
- up to -50% unplanned machine stoppages.
- up to -40% on maintenance costs.
And that's without counting the financial and ecological gains from reduced consumption: waste, material and energy losses, whose costs are currently exploding in Europe.
Add to that the gains from scaling projects in record time, better inter-site communication, and hybrid cloud/edge services, which are less demanding than full cloud.
It is therefore urgent to roll up our sleeves.
So what should you do, and above all, how?
Digitizing the workshop means first understanding that you can move much faster by iterating, and above all, that today it is possible. The problem becomes a question of software rather than hardware, and the good news is that solutions exist.
We'll give you a quick summary of what we recommend at Niagara, to help guide your thinking:
Think about access to machine data, which is sometimes locked down: favor a platform that can be deployed anywhere, close to the machines (servers, gateways, HMIs, production computers). Take the time to check the native, multi-protocol connectors of the chosen solution.
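What a "multi-protocol connector" buys you is one common record schema, regardless of how each machine speaks. Here is a hedged sketch of that normalization step; the two driver functions are hypothetical stand-ins for real connectors (OPC UA, Modbus TCP, etc.), not real library calls.

```python
import time

# Hypothetical stand-ins for real protocol drivers (OPC UA, Modbus TCP, ...).
# A real edge gateway would use the platform's native connectors instead.
def read_modbus_register():
    return 412          # raw register value, e.g. tenths of a degree

def read_opcua_node():
    return 41.5         # an already-scaled float

def normalize(machine_id, tag, value, unit):
    """Map heterogeneous protocol reads onto one common record schema."""
    return {"ts": time.time(), "machine": machine_id, "tag": tag,
            "value": float(value), "unit": unit}

records = [
    normalize("press-01", "temperature", read_modbus_register() / 10, "degC"),
    normalize("cnc-07", "temperature", read_opcua_node(), "degC"),
]
for r in records:
    print(r["machine"], r["tag"], r["value"], r["unit"])
```

Once everything downstream consumes this one schema, adding a machine means adding a driver, not rewriting the pipeline.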
Think about software accessibility: choose a no-code platform that lets any engineer or automation specialist read data from their machines and work with it, and that provides interfaces for configuring ML without writing code, such as Automi, for example.
Think flexibility: new technologies signal the end of the monolithic MES. Choose an open platform built on a microservices infrastructure. The advent of edge, pub/sub, and streaming technologies (Kafka, MQTT, etc.) will let you avoid overloading the production network.
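The reason pub/sub decouples things so well is that producers publish to a topic without knowing who consumes it, so you can add a new consumer (a dashboard, an alert, an ML model) without touching machine-side code. This toy in-process bus illustrates the pattern only; real deployments would use an MQTT or Kafka broker, and the topic name and threshold here are invented for the example.

```python
from collections import defaultdict

class TopicBus:
    """In-process illustration of publish/subscribe: publishers and
    subscribers only share a topic name, never a direct reference."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
alerts = []
# An alerting consumer, added without modifying the publisher:
bus.subscribe("plant/press-01/temperature",
              lambda m: alerts.append(m) if m > 80 else None)
bus.publish("plant/press-01/temperature", 72.0)
bus.publish("plant/press-01/temperature", 91.5)
print(alerts)  # only the out-of-range reading reaches the consumer
```

With a real broker, the same decoupling also means the production network only carries each reading once, however many consumers subscribe.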
Think scaling and industrialization: like everyone else, you don't want to fall into "pilot hell". Verify that you can deploy digital twins of your factories and spin up new instances easily. You can then begin to standardize your cross-plant OT data models, whether on a platform like Niagara or using platforms such as Azure Digital Twins or AWS IoT TwinMaker, for example. OPC UA is a robust standard, but check that your chosen solution does not force you to use this protocol, so that maintaining and managing it remains your choice.
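A "standardized cross-plant OT data model" just means every site describes its assets with the same schema, so dashboards and analytics can be reused site to site. This is an illustrative, vendor-neutral sketch (the class and field names are our assumptions, not the Azure or AWS APIs).

```python
from dataclasses import dataclass, field

# Illustrative OT data model: every plant describes assets the same way.
@dataclass
class Sensor:
    tag: str
    unit: str

@dataclass
class Machine:
    machine_id: str
    machine_type: str
    sensors: list[Sensor] = field(default_factory=list)

@dataclass
class Plant:
    site: str
    machines: list[Machine] = field(default_factory=list)

lyon = Plant("lyon", [
    Machine("press-01", "hydraulic_press",
            [Sensor("temperature", "degC"), Sensor("pressure", "bar")]),
])
# Instantiating the same schema for another site gives its twin structure:
print(lyon.site, [m.machine_id for m in lyon.machines])
```

Scaling out then becomes instantiating the model per site rather than redesigning it, which is exactly what keeps a second and third plant out of pilot hell.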
Think use cases: don't forget that the platform must let you stack different use cases. Check that your choice is a platform with no-code IT connectors to send data to different analysis software, or to your databases for big-data analysis, for example.
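Behind such an "IT connector" there is usually a simple pattern: buffer records at the edge, then flush them to the target store in one transaction. A minimal sketch, with SQLite standing in for whatever warehouse or historian the connector would actually target (the table name and records are invented for the example):

```python
import sqlite3

# Sketch of a batching connector: buffer machine records, flush in one
# transaction. SQLite stands in for the real target database here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (machine TEXT, tag TEXT, value REAL)")

buffer = [("press-01", "temperature", 41.2),
          ("press-01", "pressure", 180.5),
          ("cnc-07", "temperature", 39.8)]

with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", buffer)

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # number of rows flushed
```

Batching like this is also what keeps the load on the production network and the target database low as use cases stack up.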
Think about IT teams, as well as security: a hybrid platform will allow group IT to monitor the edge nodes deployed across the different sites, as well as access policies and incoming/outgoing flows, etc.
To summarize, for those who skimmed:
#1 - Technologies today make it possible to deploy hybrid tools at scale (edge & cloud) that tackle the collection and processing of industrial data at the machine and unlock many new uses.
#2 - To tackle the subject: think use cases, forget the technology, define clear objectives and then a restricted scope, and iterate. Expand the scope and start again.
#3 - No- and low-code tools are perfectly suited to iterating quickly without having to train factory staff in coding.
#4 - Microservices, flexibility, and scalability will let you avoid pilot hell and the famous V-cycle of legacy tools.
And you, what do you think? What have you implemented in your factories?