A benchmark for evaluating rule engines

Find out what automation tools are best suited to your IoT use case, then test them against the benchmark.

A short introduction to IoT automation

IoT application development is fundamentally different from “normal” IT development. It requires bridging the physical world of Operational Technology (OT), with its sensors, actuators and gateways, to the digital world of Information Technology (IT), with its databases, analytics and business tasks.

This bridging of two worlds has important consequences for how business rules are built within an IoT application. Hard-coding logic directly into the application is suboptimal: it is unscalable, costly and time-consuming. To solve these problems, automation is key.

Automating IoT solution development requires a rules engine, or a combination of engines. To help you determine which type of rules engine best suits your use case, we have defined an evaluation benchmark made up of seven key criteria.

7 key criteria to evaluate IoT automation tools

Technology criteria

1. Modeling complex logic

Real life is multivariable

The engine should support:

  • Combining multiple non-binary outcomes of functions (observations) in the rule, beyond Boolean true/false states.
  • Dealing with majority voting conditions in the rule.
  • Handling conditional executions of functions based on the outcomes of previous observations.
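As a rough sketch of what this criterion asks for, the snippet below (in Python, with hypothetical names such as `majority_vote` and `evaluate_rule`) combines non-binary sensor observations, applies a majority-voting condition, and conditionally executes follow-up logic based on the agreed outcome:

```python
from collections import Counter

def majority_vote(observations):
    """Return the outcome held by a strict majority of observations, else None.
    Outcomes are arbitrary labels, not just Boolean true/false states."""
    outcome, votes = Counter(observations).most_common(1)[0]
    return outcome if votes > len(observations) / 2 else None

def evaluate_rule(temperature_states):
    """Fire only when a majority of sensors agree, then branch on the outcome."""
    agreed = majority_vote(temperature_states)
    if agreed is None:
        return "no-consensus"
    # Conditional execution based on the outcome of the previous observation.
    if agreed == "hot":
        return "start-cooling"
    if agreed == "cold":
        return "start-heating"
    return "idle"

print(evaluate_rule(["hot", "hot", "normal"]))  # → start-cooling
```

A real engine would express this declaratively rather than in imperative code, but an engine that cannot model these three patterns at all will force workarounds.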

2. Modeling time

Time adds complexity

The engine should support:

  • Dealing with the past (handling expired or soon-to-expire information).
  • Handling the present (combining asynchronous and synchronous information).
  • Taking the future into account (forecasting for prediction and anomaly detection).
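To make the time dimension concrete, here is a minimal sketch (hypothetical `Reading` and `naive_forecast` names, not any particular engine's API) of handling expired readings and producing a toy forecast for anomaly checks:

```python
import time

class Reading:
    """A sensor reading with a time-to-live, so stale data can be excluded."""
    def __init__(self, value, timestamp, ttl_seconds):
        self.value = value
        self.timestamp = timestamp
        self.ttl_seconds = ttl_seconds

    def is_expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.timestamp > self.ttl_seconds

def fresh_values(readings, now=None):
    """Drop expired readings before the rule evaluates them (the past)."""
    return [r.value for r in readings if not r.is_expired(now)]

def naive_forecast(history, steps=1):
    """Linear extrapolation from the last two points (the future);
    a real engine would plug in a proper forecasting model here."""
    if len(history) < 2:
        return history[-1] if history else None
    slope = history[-1] - history[-2]
    return history[-1] + slope * steps
```

Handling the present (merging asynchronous events with synchronous polls) typically adds windowing and buffering on top of this, which is exactly the complexity a good engine should hide.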

3. Modeling uncertainty

Uncertainty is unavoidable

The engine should support:

  • Dealing with noisy sensor data and missing data.
  • Dealing with unstable wireless sensors whose reliability depends on battery lifespan.
  • Dealing with intermittent network connectivity or network outages.
  • Dealing with unreachable API endpoints.
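A sketch of the defensive patterns this criterion implies, using hypothetical helpers (`smooth`, `read_with_retry`) rather than a specific engine's API:

```python
def smooth(values, window=3):
    """Moving average to damp sensor noise; missing samples (None) are skipped."""
    present = [v for v in values if v is not None]
    if not present:
        return None
    tail = present[-window:]
    return sum(tail) / len(tail)

def read_with_retry(read_fn, retries=3, default=None):
    """Retry a flaky read (weak battery, lossy network, unreachable API)
    and fall back to a default instead of crashing the rule."""
    for _ in range(retries):
        try:
            return read_fn()
        except (TimeoutError, ConnectionError):
            continue
    return default
```

The key point is that uncertainty handling should be a first-class feature of the engine, not boilerplate rewritten in every rule.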

Implementation criteria

4. Explainability

The engine should be explainable, allowing users to understand why rules fired and to identify and correct errors. The engine’s internal complexity should not get in the way of users being able to easily test, simulate and debug it. Users also require transparency into decisions that carry inherent risk.
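One simple form of explainability is a decision trace: alongside the fire/no-fire result, the engine records which named conditions passed or failed. A minimal sketch, with hypothetical names:

```python
def evaluate_with_trace(conditions, facts):
    """Evaluate named conditions over a facts dict; return whether the rule
    fired plus a human-readable trace explaining the decision."""
    trace = []
    fired = True
    for label, predicate in conditions.items():
        result = predicate(facts)
        trace.append(f"{label}: {'passed' if result else 'failed'}")
        fired = fired and result
    return fired, trace

conditions = {
    "temperature above threshold": lambda f: f["temp"] > 30,
    "door is open": lambda f: f["door_open"],
}
fired, trace = evaluate_with_trace(conditions, {"temp": 35, "door_open": False})
# trace tells the user exactly which condition blocked the rule
```

Engines that only emit an opaque verdict make simulation and debugging far harder than engines that expose traces like this.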

5. Adaptability

The engine should be flexible enough to support both commercial and technical changes with minimum friction, such as changing customer requirements or changes in APIs. To account for future growth, the rules engine should be easily extendable and capable of integrating with external systems.

6. Operability

The engine should be operationally scalable. When deploying applications with many thousands or possibly millions of rules running in parallel, the engine should manage these large volumes effectively by supporting templating, versioning, searchability, bulk upgrades and rules analytics.
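Templating and bulk upgrades can be sketched as stamping out many versioned rule instances from one shared definition (the `make_threshold_rule` name and rule shape here are hypothetical):

```python
def make_threshold_rule(device_id, threshold, version="1.0"):
    """One versioned rule instance from a shared template; the same template
    can be stamped out for thousands of devices."""
    return {
        "id": f"overheat-{device_id}",
        "version": version,
        "condition": lambda reading: reading > threshold,
    }

# Templating: instantiate the rule per device.
rules = [make_threshold_rule(d, threshold=70) for d in ("pump-1", "pump-2")]

# Bulk upgrade: bump the version across the whole fleet in one pass.
upgraded = [dict(r, version="1.1") for r in rules]
```

With rules stored as searchable, versioned records, fleet-wide analytics and audits become straightforward queries rather than manual inspection.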

7. Scalability

The engine should provide a good initial framework and abstractions for distributed computing to enable easy sharding. Sharding means horizontally partitioning a component’s workload, which enables linear scaling: deploying “n” instances of the same component yields “n” times the performance.
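The core of sharding is a stable partitioning function: each device (and its rules) always lands on the same shard, so shards can evaluate independently and in parallel. A minimal sketch, with hypothetical helper names:

```python
import hashlib

def shard_for(device_id, num_shards):
    """Stable hash-based assignment: the same device always maps to the
    same shard, regardless of which node computes it."""
    digest = hashlib.sha256(device_id.encode()).hexdigest()
    return int(digest, 16) % num_shards

def partition(device_ids, num_shards):
    """Split a fleet of devices across shards for parallel rule evaluation."""
    shards = [[] for _ in range(num_shards)]
    for device_id in device_ids:
        shards[shard_for(device_id, num_shards)].append(device_id)
    return shards
```

A production engine would layer rebalancing and fault tolerance (e.g. consistent hashing) on top, but an engine whose rule state cannot be partitioned at all cannot scale linearly no matter how it is deployed.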
