Introduction

In this blog post, we'll explore the impact that Generative AI can have on the productivity of service operations in industrial and manufacturing enterprises, and why Waylay is ideally suited to harness the power of large language models in combination with predictive and preventive maintenance asset monitoring rules.

What can generative AI mean to a service agent?

We'll focus on the persona of the service agent. Service agents come in different types, depending on their level of technical expertise, whether they are in-house or outsourced, whether they belong to an external service dealer chain, and so on. There are also field service agents, who go into the field to perform asset maintenance activities.

What they all have in common is that they need to do their job as efficiently and quickly as possible. The Generative AI technology boom is not here to take away these people’s jobs, but rather to complement them with an intelligent assistant.

We’ve extended the Waylay Digital Twin application for Salesforce Service Cloud with the Waylay Digital Twin Rule Explainer assistant.

If you’re a service agent tasked with handling predictive and preventive asset health notifications (‘Waylay Alarms’), your job is to determine as efficiently as possible what should be done to resolve the issue and prevent asset downtime. This often involves deciding whether to create a case, a support ticket, or a work order, or to escalate the problem to a different team. In industrial and manufacturing enterprises, this decision is not always fully automated: a human being must provide the necessary technical context to turn the proactive asset health notification into one or more cases, and add their own assessment of the problem based on their expertise.

Our Waylay Digital Twin Rule Explainer assistant is going to help them create that technical assessment.

In the above example screenshot, we’re dealing with a proactive battery health notification, generated by the Waylay Rules Engine, which indicates that the asset, an electric bus, is at risk of shutting down due to battery degradation. The information about this ‘Waylay Alarm’ is somewhat terse and depends on the quality of the information entered by the designer of the proactive rule that generated it. That person might belong to a completely different organization. In other words, the service agent needs to make their assessment based on the information that is on display.

To guarantee consistent, to-the-point, assistance from a foundational Large Language Model (LLM), like gpt-3.5-turbo, we had to:

  • Restrict the questions the service agent is allowed to ask the assistant. For example:
  1. Can you explain this alarm?
  2. What conditions triggered this alarm?
  3. What should I do?
  4. Can you summarize the rule logic?
  5. What is wrong with this asset?
  6. Why is this alarm firing so much?
  7. etc.
  • Heavily engineer the prompt sent to the model so that the formatting is useful in an industrial and manufacturing enterprise setting.
  • Create a fine-tuned LLM that can reason on the rule logic (see further).
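The first two guard rails can be sketched in a few lines of code: an allow-list of supported questions plus a prompt template that injects the rule context before the question reaches the model. This is a minimal illustration; names such as `ALLOWED_QUESTIONS` and `build_prompt` are our own, not part of the Waylay product.

```python
# Illustrative guard rails: restrict what can be asked, and wrap every
# allowed question in a fixed, context-grounded prompt template.

ALLOWED_QUESTIONS = {
    "Can you explain this alarm?",
    "What conditions triggered this alarm?",
    "What should I do?",
    "Can you summarize the rule logic?",
    "What is wrong with this asset?",
    "Why is this alarm firing so much?",
}

PROMPT_TEMPLATE = (
    "You are a service-desk assistant for industrial assets.\n"
    "Answer only from the rule context below; if the answer is not in the "
    "context, say that you don't know.\n\n"
    "Rule context:\n{rule_context}\n\n"
    "Question: {question}\n"
)

def build_prompt(question: str, rule_context: str) -> str:
    """Reject free-form questions; wrap allowed ones in the template."""
    if question not in ALLOWED_QUESTIONS:
        raise ValueError("Question is not in the supported list")
    return PROMPT_TEMPLATE.format(rule_context=rule_context, question=question)
```

The resulting string would then be sent to the (fine-tuned) model; restricting the input space this tightly is one simple way to keep answers consistent and on-topic.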

In the above screenshot, you can see an example of the answer that our fine-tuned LLM gave to the question ‘Can you explain this alarm?’ The Waylay Digital Twin Rule Explainer assistant generated an answer that is:

  • Human-readable
  • Multilingual
  • Structured, with clear explanations of which conditions were checked and what parameter values were evaluated
  • Specific about the reason and the recommended action

This piece of text can now be used to supplement the description and subject of the case to be created. In this way, the next person, whether external or internal to the organization, has a much more contextual understanding of why this case was created, the urgency of it, and - very importantly - more trust in the solution of proactive service delivery. In case of work orders for field engineers, the result of the Waylay Digital Twin Rule Explainer can be used to populate pre-work briefs or asset maintenance checklists.

How is this even possible?

One word: explainability. Large Language Models are really good at summarizing and reasoning on structured documents. The Waylay Platform takes a structured approach, describing predictive and preventive asset monitoring rule patterns as stateful causal graphs. Once the LLM is trained on these rule patterns, it can reason on them: it understands their state in real time, the parameters they evaluate and when a rule triggers, it can determine the root cause of alarms, and it grasps their meaning. You can’t do this with convoluted code scripts or data analytics pipelines powered by disparate notebooks.
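To make the causal-graph idea concrete, here is one plausible way such a rule could be represented and flattened into text that an LLM can reason over. The field names and structure are assumptions for illustration only, not the Waylay Platform's actual rule format.

```python
# A hypothetical battery-degradation rule as a small causal graph:
# sensor -> condition -> action, each node carrying its live state.
battery_rule = {
    "name": "battery-degradation-alarm",
    "nodes": [
        {"id": "soh", "type": "sensor",
         "description": "battery state of health (%)"},
        {"id": "low_soh", "type": "condition", "inputs": ["soh"],
         "expression": "soh < 80", "state": "triggered"},
        {"id": "alarm", "type": "action", "inputs": ["low_soh"],
         "description": "raise Waylay Alarm: asset at risk of shutdown"},
    ],
}

def graph_to_text(rule: dict) -> str:
    """Flatten the graph into numbered statements suitable for a prompt."""
    lines = [f"Rule: {rule['name']}"]
    for i, node in enumerate(rule["nodes"], 1):
        deps = ", ".join(node.get("inputs", [])) or "none"
        detail = node.get("expression") or node.get("description", "")
        state = node.get("state", "n/a")
        lines.append(f"{i}. [{node['type']}] {node['id']} "
                     f"(depends on: {deps}; state: {state}): {detail}")
    return "\n".join(lines)
```

Because the graph makes causality and state explicit, a model trained on many such serialized rules can explain why a given alarm fired by walking the chain of triggered conditions, which is much harder when the logic is buried in free-form scripts.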

Waylay’s fine-tuned rule explainer LLM model has been trained on hundreds of anonymized asset monitoring rules from different markets in which Waylay has been active: industrial assets, building management, automotive, agritech, telecoms and finance. 

This means that the Waylay Rule Explainer assistant can now reason on any predictive or preventive asset monitoring rule created by the rule designer, even if it hasn’t seen that exact rule, or even that type of asset, before. And the best thing of all: our fine-tuned model is now fully trained, and every Waylay customer can use it as is, no retraining necessary! Of course, if you want to extend the answers of the Waylay Rule Explainer to include references to your own private knowledge base articles, asset manuals, etc., some retraining might be necessary.

What did we learn?

Generative AI and the use of large language models (LLMs) have taken the world by storm in 2023. Companies like OpenAI have democratized their use through easy interfaces and an attractive price point. As usual with new technologies, it is going through a hype cycle, with loads of small productivity apps appearing first.

Now, we gradually see a lot of interest in using Generative AI technologies in mature industries like manufacturing. That’s a whole different ball game than generating yet another cat picture. Reliability, consistency, and contextuality are essential in these environments, where there’s no room for hallucinations at all. We believe the technology is ready in 2024 to meet the requirements of the industrial market, provided that you put the necessary guard rails in place.

That’s exactly what we’ve been doing with the Waylay Digital Twin Rule Explainer for Service Agents. We’ve trained a foundational LLM to meet the needs of the remote Service Engineer and Field Service Agent, such that they can use it as an intelligent assistant and get significant productivity gains. 

Interested in a demo? Request one here: