Data Best Practices for Successful AI Deployment

Applying time series AI to industrial operations can lead to improvements in equipment availability, production quality, and overall plant performance. However, even the most successful manufacturers will find that it is possible to improve results further. The principle of GIGO (garbage in, garbage out) dictates that the quality of the output is only as good as the quality of the input. If the input signal or the resulting data is suboptimal, the analysis layer may return suboptimal inferences. For example, a false positive could mean slowing production to perform unnecessary maintenance, while a false negative could lead to unplanned downtime. If your data collection and usage practices are streamlined, such situations can be avoided and you can extract the full benefit of deploying AI. Here are some data guidelines and best practices that we have gathered from working with our clients over the years.


Have a clear goal


Spending on data infrastructure such as data lakes without first identifying the problem to be solved, and the data actually needed to solve it, usually ends up being wasteful and yields only a fraction of the potential value of the collected data. By the time the organization aligns on how the data should be used, the data may be malformed or rendered unusable because it is no longer relevant, or because it lacks the context of plant operating conditions that were never captured and therefore cannot be correlated with what sits in the data lake. A better approach is to uncover relevant patterns and surface insights from today’s operational data, applying early intelligence practices and working in the present. As AI pioneer and educator Andrew Ng says, “start with a mission.”


Data collection discipline is important


It is surprising how often sensors turn out not to be working properly, or how often details about the reasons for downtime are missing or not specific enough. The asset model in many EAM systems is superficial, so findings made during repairs are never attributed to the root-cause components. This makes it difficult to establish the “ground truth” needed to train AI models.
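As a minimal illustration of what such discipline checks can look like, the sketch below (using hypothetical file and column names) flags two common gaps: sensors that have flat-lined and downtime events logged without a specific reason code.

```python
# Minimal data-quality check sketch (hypothetical file and column names).
import pandas as pd

sensors = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])   # columns: timestamp, tag, value
downtime = pd.read_csv("downtime_log.csv", parse_dates=["start", "end"])  # columns: start, end, asset, reason

# A sensor whose value never changes over the loaded window is likely stuck or disconnected.
flatlined = sensors.groupby("tag")["value"].agg(lambda v: v.nunique() <= 1)
for tag in flatlined[flatlined].index:
    print(f"WARNING: tag '{tag}' appears flat-lined over the loaded window")

# Downtime entries with a blank or generic reason cannot serve as ground truth for training.
generic_reasons = {"", "other", "unknown", "misc"}
vague = downtime[downtime["reason"].fillna("").str.strip().str.lower().isin(generic_reasons)]
print(f"{len(vague)} of {len(downtime)} downtime events lack a specific reason code")
```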


Use standardized tags


It is not uncommon for plants to change tag names over time. While name changes do not pose a real problem for post-event forensics, they can create a significant data integrity issue for off-the-shelf real-time AI algorithms. Other data management activities, such as streamlining columns and rows, are not significant challenges for desktop analytics but require intelligent signal processing for self-supervising AI. A successful AI platform should be able to automatically alert operational technology teams to unexpected tags or tag name changes.
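A minimal sketch of such a tag audit is shown below, assuming the standardized tag list is maintained somewhere authoritative and the live tags can be read from the historian feed; the tag names are purely illustrative.

```python
# Minimal tag-audit sketch (hypothetical sources): compare the tags arriving in a
# live feed against the expected, standardized tag list and alert on differences.
def audit_tags(expected_tags, incoming_tags):
    expected = set(expected_tags)
    incoming = set(incoming_tags)
    unexpected = incoming - expected   # possibly renamed or newly added tags
    missing = expected - incoming      # tags that stopped reporting (or were renamed away)
    return unexpected, missing

# Example with illustrative tag names.
expected = {"PUMP01.VIB_X", "PUMP01.AMPS", "PUMP01.SUCTION_PRESS"}
incoming = {"PUMP01.VIB_X", "PUMP01.CURRENT", "PUMP01.SUCTION_PRESS"}

unexpected, missing = audit_tags(expected, incoming)
if unexpected or missing:
    print(f"ALERT: unexpected tags {sorted(unexpected)}, missing tags {sorted(missing)}")
```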


Ensure adequate sensor data is available


When looking at “integrated equipment”, it is often the case that only alarms and status alerts are available, while the physical analog signals are missing or locked up inside the equipment’s controls. Make sure your equipment provider can share the necessary signal data. It is important to ask whether the sensor signals needed to characterize the problem being addressed are actually available.


Commit the resources required for data maintenance


This does not mean that a data scientist is needed; in fact, a reliability engineer or field maintenance manager can ensure that data quality is maintained, that events of interest are recorded correctly and in a timely manner, and that conditions detected by the AI are correctly registered and acted upon where necessary. These activities become less cumbersome with easy-to-use AI applications; however, resources still need to be dedicated to providing continuous feedback to the AI, especially during model training and revisions. Think of AI as a smart but new employee that needs to be coached and mentored. Good data management will enable AI to reach its full potential in delivering results.


Get the right data type


Signals for electromechanical conditions as well as process data must be taken into account. For example, in pump impeller failure detection, one can look at vibration or amperage signals to know whether the equipment is approaching a critical failure condition. However, adding upstream process pressure and temperature can give an early indication of the cavitation that is the underlying reason the impeller is eroding.
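As a rough illustration of why the extra process signals matter, the sketch below (with hypothetical column names and placeholder thresholds) raises an early warning when suction pressure drops while vibration is still only mildly elevated, rather than waiting for vibration alone to cross an alarm limit.

```python
# Rough sketch (hypothetical tags and placeholder thresholds): combine process and
# electromechanical signals to warn about likely cavitation earlier than a
# vibration-only alarm would.
import pandas as pd

df = pd.read_csv("pump01_history.csv", parse_dates=["timestamp"]).set_index("timestamp")
# Expected columns (illustrative): vib_mm_s, motor_amps, suction_press_bar, suction_temp_c

VIB_ALARM = 7.1          # vibration alarm limit, mm/s (placeholder value)
VIB_ELEVATED = 4.5       # "mildly elevated" vibration, mm/s (placeholder value)
MIN_SUCTION_PRESS = 0.8  # below this the pump is starved and may cavitate (placeholder value)

vibration_only_alarm = df["vib_mm_s"] > VIB_ALARM
early_cavitation_warning = (
    (df["suction_press_bar"] < MIN_SUCTION_PRESS)   # upstream process indication
    & (df["vib_mm_s"] > VIB_ELEVATED)               # vibration rising but not yet in alarm
    & (df["vib_mm_s"] <= VIB_ALARM)
)

print("Samples in vibration-only alarm:", int(vibration_only_alarm.sum()))
print("Samples with early cavitation warning:", int(early_cavitation_warning.sum()))
```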


Let the data surprise you


If you follow the data best practices above, a successful AI engine should be able to unearth all sorts of valuable insights from the data, which can be used to effect positive changes in operational behavior that show benefits over time. It is important not to develop tunnel vision by only looking for failure modes. One needs to ask questions such as: could an anomaly identified by the AI mean something else? Is a shift in a sensor’s value related to a loss of calibration? Is there an implication for product quality that we don’t see because we focus on predictive maintenance?

Essentially, there is a tendency to miss the forest for the trees. To avoid this, let the data speak to you. Let it tell you the story. In the factory, domain experts make assessments based on their experience or training, but in a data-driven world that approach can be counterproductive. Thought processes need to be reframed toward a data-driven mindset. After all, if it were only a question of expertise, factories already have plenty of experts.

Capable AI facilitates the analysis of unusual conditions and provides enhanced continuous monitoring. With the right type of data and easy-to-use AI, maintenance teams gain visibility into equipment variables they did not have before, resulting in a variety of improvements beyond failure mode detection.



About the Author


Sheetal Birla leads marketing for Falkonry. She is responsible for the vision, strategy and execution of marketing and growth initiatives and, in this role, engages both customers and partners. Sheetal is a technology evangelist with expertise in bringing digital solutions and transformation to clients in mature and emerging markets. Prior to Falkonry, she held technical and marketing management positions at Samsung, Siemens and British Telecom. Sheetal holds a BS in Computer Science and Engineering and an MBA from INSEAD.


