Abstraction is the removal of details in order to enhance the visibility of a pattern. A useful abstraction is one that removes things we don't need to concern ourselves with in a given context. In software development, one common form of abstraction is declarative programming. A rule engine is a software tool that enables developers to model the world in a declarative way.
In his blog post "Declarative vs Imperative Programming", Mundy writes:
- Declarative Programming is like asking your friend to draw a landscape. You don’t care how they draw it, that’s up to them.
- Imperative Programming is like your friends listening to Bob Ross tell them how to paint a landscape. While good old Bob Ross isn’t exactly commanding, he is giving them step by step directions to get the desired result.
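The contrast can be made concrete with a small sketch. The snippet below is illustrative only (the variable names are our own): the imperative version gives step-by-step directions, while the declarative version states the desired result and leaves the "how" to the language.

```python
# Imperative: spell out every step, Bob Ross style.
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# Declarative: describe what you want, not how to build it.
squares_decl = [n * n for n in range(10) if n % 2 == 0]

assert squares == squares_decl  # same result, very different descriptions
```

The declarative form expresses the logic of the computation (even squares below 100) without describing its control flow, which is exactly the property discussed next.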
Declarative programming is also understood as expressing the logic of a computation without describing its control flow. Propositional logic (which assumes the world contains facts) and first-order logic (which assumes the world contains objects, relations and functions) are integral parts of every computer language, so one may argue that a general-purpose language is all that is needed to enable developers to write algorithms and conditional statements (rules).
For software developers, a rule engine is useful only if it liberates them from expressing the rule in code. Therefore, the goal of a rule engine is to take this abstraction to the next level. Any time a developer fails to solve a particular rule (use case) with a rule engine, she will eventually be forced to "solve it in the code anyway" – which means she now has to maintain two abstractions in parallel, one in the rule engine and one in the code. That is a nightmare.
In order to avoid this pitfall, it is commonly accepted that we should use rule engines only where appropriate, or not at all. Over the past decades, that has become a self-fulfilling prophecy. Driven by the idea that if something doesn't work we'll have to sort it out in the code anyway, we have placed limits on what rule engines can do, while at the same time narrowing the set of problems we feel can safely be addressed by them.
Here are three results of this self-fulfilling prophecy.
- A BPM (Business Process Management) rule engine is a finite state machine: a set of defined states, with defined transitions between them triggered by individual messages. BPM rule engines are capable of process modelling using state-transition diagrams, but they are not good at dealing with real-time data.
- Flow engines are good at dealing with real-time data, but they are extremely hard to debug and reason about. They consume events by chaining functions, passing messages from one function's output to the next function's input, and trying to debug by looking at the flow-diagram graph alone is very difficult.
- Condition/action (IFTTT-style) rules are good at executing simple scenarios but are unusable for anything more complex than linking one input to one output.
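The finite-state-machine point from the first bullet can be sketched in a few lines. This is a generic illustration, not the design of any particular BPM product: the states and transitions are fixed up front, so a message the model never anticipated simply has no defined transition.

```python
# Minimal finite state machine: all states and transitions are declared
# in advance, keyed by (current state, incoming message).
TRANSITIONS = {
    ("received", "approve"): "approved",
    ("received", "reject"): "rejected",
    ("approved", "ship"): "shipped",
}

def step(state, message):
    # A message outside the table has no defined transition: the FSM
    # silently ignores data it was not modelled for.
    return TRANSITIONS.get((state, message), state)

state = "received"
for msg in ["approve", "ship", "sensor_reading"]:  # the last one is ignored
    state = step(state, msg)
# state is now "shipped"
```

This rigidity is what makes FSM-based engines well suited to process modelling and poorly suited to a continuous stream of real-time sensor data.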
Developers have realised that some problems are extremely difficult to express and solve using existing rule engines. They are right, and that is the main reason there is healthy scepticism around using rule engines.
Here are some of the major shortcomings of existing rule engines:
- limitations in expressing higher-order logic (HOL) constructions
- limitations in dealing with the time dimension (information that is only valid for a period of time, or merging streams that are not fully in sync)
- limitations in dealing with both synchronous and asynchronous events
- no easy way to gain additional insight: why did a rule fire, and under which conditions?
- inability to model uncertainty, e.g. what to do when sensor data is noisy or missing due to a battery or network outage
- limitations in enabling developers to extend integration capabilities with external systems
- limitations in simulation and debugging
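To make the time-dimension shortcoming concrete, here is a small sketch of a fact that is only trustworthy for a limited period, such as a sensor reading that goes stale. The class and names are hypothetical, chosen purely for illustration; most existing rule engines have no native notion of this validity window.

```python
from datetime import datetime, timedelta

class TimedFact:
    """A value that is only valid for a limited time after observation.

    Hypothetical helper for illustration -- not part of any rule engine API.
    """
    def __init__(self, value, observed_at, ttl):
        self.value = value
        self.observed_at = observed_at
        self.ttl = ttl

    def is_valid(self, now):
        # The fact expires once its time-to-live has elapsed.
        return now - self.observed_at <= self.ttl

# A temperature reading valid for five minutes after it was taken.
reading = TimedFact(21.5, datetime(2020, 1, 1, 12, 0), timedelta(minutes=5))
assert reading.is_valid(datetime(2020, 1, 1, 12, 3))        # still fresh
assert not reading.is_valid(datetime(2020, 1, 1, 12, 10))   # gone stale
```

A rule that naively joins this reading with another stream that arrives out of sync can easily fire on expired data, which is exactly the class of bug the list above points at.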
In a follow-up post we will look at some of the ways in which these challenges have been addressed and present our own approach to each of them.
To find out more about the Waylay engine and its internals, go to our documentation page or read the following blogs on the same subject:
- The Waylay engine, Part 1: One rules engine to rule them all
- The Waylay engine, Part 2: Bayesian inference-based programming using smart agents
- The curse of dimensionality in decision trees – the branching problem
- Rule patterns
- Creating applications with cloud functions – how to manage rules and orchestration in serverless architectures
- AI and IoT, Part 1: Challenges of applying Artificial Intelligence in IoT using Deep Learning
- AI and IoT, Part 2: Deep Learning and Bayesian Modelling, building the automation of the future
- AI and IoT, Part 3: How to apply AI techniques to IoT solutions – a smart care example