In this blog post, I would like to share our latest success story from a valued customer, whom we regretfully cannot name for confidentiality reasons. The customer's journey with us revolved around predictive analysis of failures in heavy machinery: estimating the likelihood of these machines shutting down under specific conditions, a crucial aspect of optimizing post-sales support in many organizations.

The challenges in this realm are substantial, as ensuring timely spare-parts delivery to customers, or informing the customer about an incorrect operation mode, is paramount. Any delay or machine interruption on the customer's side can lead to disruptions in production or servicing commitments.

In this particular case, the customer initially sought a solution by collaborating closely with one research unit. The conventional approach involved gathering test data and developing an advanced machine learning algorithm—a promising strategy on paper.

However, after months of development, the results on the customer's side were disappointingly poor. The main issue stemmed from an ML model that was too complex to maintain: it only worked under very narrow conditions and generated an excessive number of false positives and negatives. Tweaking the algorithm to yield the desired results proved extremely challenging, if not impossible.

Furthermore, communication with the business stakeholders proved difficult and cumbersome. The stakeholders didn't understand the logic, couldn't grasp the end-to-end use case, and multiple iterations riddled with miscommunication left all parties frustrated.

This is where Waylay came into the picture, offering a different approach. We decided to start with simplicity, examining thousands of assets and meticulously analyzing the customer's datasets. In collaboration with the customer, we then defined specific problems within multiple use cases, tackling them one at a time. Our approach was data-centric, grounded in the development of straightforward rules and thresholds based on the observed data.
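To give a feel for what such a rule can look like, here is a minimal sketch of a threshold-based check. The signal names, units, and limits are purely illustrative assumptions, not the customer's actual data or Waylay's production rules:

```python
# Hypothetical sketch of a simple threshold rule over observed telemetry.
# Signal names and limits are illustrative only.
from dataclasses import dataclass

@dataclass
class Reading:
    asset_id: str
    oil_temperature_c: float    # degrees Celsius
    vibration_rms: float        # mm/s
    hours_since_service: float

def failure_risk_rule(r: Reading) -> bool:
    """Flag an asset when several simple symptoms co-occur."""
    overheating = r.oil_temperature_c > 95.0
    excessive_vibration = r.vibration_rms > 7.1
    overdue_service = r.hours_since_service > 500.0
    # Require at least two of the three symptoms to reduce noise.
    return sum([overheating, excessive_vibration, overdue_service]) >= 2

if __name__ == "__main__":
    sample = Reading("asset-42", oil_temperature_c=101.0,
                     vibration_rms=7.8, hours_since_service=120.0)
    print(failure_risk_rule(sample))  # True: two symptoms present
```

Rules like this are trivial to inspect, explain to stakeholders, and adjust after each round of feedback, which is exactly what made the iterative process below possible.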

Through this iterative process, we managed to devise simple rules that scrutinized the characteristics of the dataset, reducing service downtime significantly. As the exercise unfolded, we encountered instances where certain settings failed to identify problems reported by the customer—what we call false negatives. Over time, we refined these rules, incorporating insights gained from testing phases, and progressively achieved better outcomes.

However, we also encountered false positives, instances where alarms were triggered unnecessarily. This created a delicate balancing act, akin to a yo-yo effect. Working closely with the customer, we ultimately struck the right equilibrium between false positives and negatives, significantly enhancing the prediction of failures to an impressive accuracy rate of 80%.
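One simple way to keep that trade-off visible during tuning is to recompute precision (how many alarms were real) and recall (how many real failures were caught) after every rule change. The sketch below illustrates the idea; the counts are invented for illustration and do not come from the customer's data:

```python
# Minimal sketch for tracking the false-positive / false-negative balance
# while tuning rules. The confusion counts below are invented.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of alarms that were real
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of failures that were caught
    return precision, recall

# Example: a stricter threshold raises precision but lowers recall.
loose_rule = precision_recall(tp=40, fp=35, fn=10)    # (0.53, 0.80)
strict_rule = precision_recall(tp=32, fp=8, fn=18)    # (0.80, 0.64)
print(loose_rule, strict_rule)
```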

What does this story tell us about the right approach to solving such problems? First and foremost, it highlights the fallacy of either dismissing machine learning altogether or attempting to be overly sophisticated from the outset. Without the means to adjust models with the right data, starting too smart upfront can be counterproductive.

The key is to begin with a strong foundation—a robust dataset that is rigorously validated and annotated, even if it means working with limited results initially. Over time, as we become more adept at problem-solving, we can introduce machine learning models. Meanwhile, customers can already reap the benefits of what some might call a brute-force approach, while we work behind the scenes to fine-tune and optimize the process.

The lesson learned here is that there's no need to make an exclusive choice between writing code or setting rules versus diving into heavy machine learning models. It should be a gradual process, starting with simple, adjustable rules that can be quickly validated and then integrating more sophisticated models as we gather more insights and obtain the right validation and training data.
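One way to keep that migration path open, purely as an architectural sketch rather than a description of any specific product internals, is to put rules and models behind the same decision interface, so a hand-written rule can later be swapped for a trained model without touching the surrounding pipeline. All names below are hypothetical:

```python
# Hypothetical sketch: rules and ML models behind one predictor interface,
# so a rule-based start can later be replaced by a trained model.
from typing import Protocol

class FailurePredictor(Protocol):
    def predict(self, features: dict[str, float]) -> bool: ...

class ThresholdRule:
    def __init__(self, limits: dict[str, float]):
        self.limits = limits
    def predict(self, features: dict[str, float]) -> bool:
        return any(features.get(k, 0.0) > v for k, v in self.limits.items())

class LearnedModel:
    def __init__(self, model):  # e.g. a fitted classifier with a predict() method
        self.model = model
    def predict(self, features: dict[str, float]) -> bool:
        return bool(self.model.predict([list(features.values())])[0])

def run_pipeline(predictor: FailurePredictor, features: dict[str, float]) -> None:
    if predictor.predict(features):
        print("raise maintenance alert")

# Start simple; swap in LearnedModel once validated training data exists.
run_pipeline(ThresholdRule({"oil_temperature_c": 95.0}),
             {"oil_temperature_c": 102.0})
```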

In our company, you don't have to commit to one approach over the other. What's crucial is that our solution allows for both strategies, eliminating the need to front-load excessive research costs. Simple, effective techniques can provide immediate benefits, and with time and improved understanding, we can harness the power of scientific methods to enhance our solutions. Equally fundamental is the ability to adapt to business needs quickly. Communication channels with all stakeholders must be open, frequent, brief, and effective, so that enthusiasm snowballs through the organization.

Ultimately, this story underscores that engineering is about applied science. While we certainly trust science, our primary focus is solving problems in the most pragmatic way possible. As we move forward, we continue to embrace the principles of science to refine and enhance our processes. Our solution serves as the perfect means to achieve this goal, offering the flexibility to adapt and evolve with the ever-changing landscape of technology and industry needs.

About the Author

Veselin Pizurica is the COO and co-founder of Waylay. Throughout his career, Veselin has worked in the fields of cloud computing, the semantic web, artificial intelligence, signal and image processing, spherical lattice coding, pattern recognition, home networking, MPLS routing, xDSL troubleshooting, and optical networks. Veselin holds 12 patents in the domains of artificial intelligence, xDSL, home networks, and cloud computing. He has been the driving force behind the creation and fine-tuning of the digital twin concept in the IoT developer community.