The Waylay engine is a rule engine that separates information, control and decision flow using the smart agent concept: sensors, logic and actuators are separate entities of the rule engine.

Update (November 2020): an extended form of this material was presented at the Serverless Architecture Conference in Berlin 2020: “Solving the weak spots of Serverless with Directed Acyclic Graph Model”

Waylay lambda functions (𝛌) are defined as either sensors or actuators. Sensors are “typed” 𝛌 functions, which can return a state, data or both.

Any time a sensor is executed, its results (both the sensor’s state and the sensor’s data) are processed by the rule engine’s inference, which may result in the execution of actuators (other 𝛌 functions) or of other sensors.
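To make the sensor concept concrete, here is a minimal, hypothetical sketch of such a 𝛌 function in TypeScript. The `options`/`send` shape and the `observedState`/`rawData` result fields are assumptions made for illustration, not the exact Waylay runtime API.

```typescript
// Hypothetical sensor: reads a temperature value and reports both a state and data.
// The options/send signature is an assumption for illustration only.

interface SensorResult {
  observedState: string;             // the state the rule engine will reason over
  rawData: Record<string, unknown>;  // data made available to other nodes in the task
}

function temperatureSensor(
  options: { requiredProperties: { threshold: number; value: number } },
  send: (error: Error | null, result?: SensorResult) => void
): void {
  const { threshold, value } = options.requiredProperties;

  // Return a state ("Above"/"Below") together with the data that produced it.
  send(null, {
    observedState: value > threshold ? 'Above' : 'Below',
    rawData: { temperature: value, threshold },
  });
}
```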

Logic creation

Rules are created either in a visual programming environment with drag-and-drop functionality (see the screenshot below) or via REST calls. Once created, rules are saved as JSON files. The visual programming environment allows the developer to make use of the library of sensors and actuators, logical gates as well as mathematical function blocks.
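To give a feel for what such a JSON rule looks like, here is a heavily trimmed, hypothetical sketch; the field names (`sensors`, `actuators`, `triggers`) are assumptions for illustration and the real schema is described in the documentation.

```typescript
// Hypothetical, trimmed-down shape of a rule saved as JSON; field names are assumptions.
const rule = {
  name: 'highTemperatureAlert',
  sensors: [
    { label: 'tempSensor', name: 'temperatureSensor', version: '1.0.0' },
  ],
  actuators: [
    { label: 'mailer', name: 'sendMail', version: '1.0.0' },
  ],
  // Logic: fire the actuator when the sensor reports the "Above" state.
  triggers: [
    { sourceLabel: 'tempSensor', destinationLabel: 'mailer', statesTrigger: ['Above'] },
  ],
};
```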

Tasks

In Waylay terminology, tasks are instantiated rules. There are two ways tasks can be instantiated, both sketched after the list below:

  • one-off tasks, where sensors, actuators, logic and task settings are configured at the time the task is instantiated
  • tasks instantiated from templates, where task creation is based on the template (which describes sensors, actuators and logic)
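The sketch below illustrates both flavours as REST payloads. The `/api/tasks` endpoint, the `template` field and the scheduling properties are assumptions made for this example and may differ from the actual Waylay API; authentication is omitted for brevity.

```typescript
// Hypothetical sketches of task instantiation; endpoint and field names are assumptions.

// 1. One-off task: sensors, actuators, logic and task settings supplied at creation time.
const oneOffTask = {
  name: 'adhoc-temperature-check',
  sensors: [{ label: 'tempSensor', name: 'temperatureSensor', version: '1.0.0' }],
  actuators: [{ label: 'mailer', name: 'sendMail', version: '1.0.0' }],
  triggers: [{ sourceLabel: 'tempSensor', destinationLabel: 'mailer', statesTrigger: ['Above'] }],
  type: 'periodic',        // task "master clock": poll every 5 minutes
  frequency: 300_000,
};

// 2. Task instantiated from a template that already describes sensors, actuators and logic.
const templatedTask = {
  name: 'building-42-temperature',
  template: 'highTemperatureAlert',
  resource: 'building-42',
  type: 'scheduled',
  cron: '0 */15 * * * *',  // alternative master clock: a cron expression
};

async function createTask(task: object): Promise<void> {
  const response = await fetch('https://example.waylay.io/api/tasks', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(task),
  });
  console.log('task created with status', response.status);
}
```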

A task also defines the “master clock” of the rule, such as the polling frequency or a cron expression (these settings can be inherited by sensors as well).
Before any 𝛌 function (sensor or actuator) is invoked, the engine makes a copy of the task context, which provides the calling 𝛌 function, if required, with the results and data of all sensors executed up to that moment in time.

Let’s take a closer look at the picture above. Blue arrows mark the information flow, red arrows the control flow and green arrows the decisions.
Two sensors are shown on the left side of the picture and two actuators on the right. Every sensor is composed of three parts:

  • Node settings that define the control flow (when the function is executed)
  • Sensor settings – input arguments for the function
  • 𝛌 function code itself, which returns states and data

In the picture below we see sensor settings and 𝛌 function code:


Control flow

A sensor, being a 𝛌 function, can be invoked:

  • Via a polling frequency, a cron expression or a one-time execution (defined either at the node level or inherited from the “master clock” of the task settings).
  • On new data arriving, if the node can be addressed via a resource: e.g. if the node is labeled with the resource testresource, the function is called any time data arrives for that resource. The payload which triggered the sensor is available to the calling function (blue arrow).
  • As the result of other function calls (sensors), via state transitions of the attached sensor (depicted as the red arrow that goes from the top sensor to the other one).
  • As the outcome of multiple function executions, via inference (logical gates).
  • And of course, with all of these conditions combined if needed!

Node settings are defined the moment the rule is configured:

In this example, we decided to invoke the sensor only when data arrives for testresource, and we have also configured an eviction time of 10 seconds. This way we decide for how long each sensor’s information stays valid. That is also an elegant way of merging different event streams where information is valid only for a short period of time, a very important aspect to take into account when making decisions, as is explained here.
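Expressed in the rule’s JSON, such node settings could look roughly like the snippet below; the property names (`resource`, `dataTrigger`, `evictionTime`) are assumptions for illustration, not necessarily the exact schema.

```typescript
// Hypothetical node settings for a data-triggered sensor; property names are assumptions.
const nodeSettings = {
  label: 'tempSensor',
  resource: 'testresource',  // invoke only when data arrives for this resource
  dataTrigger: true,         // control flow: triggered by incoming data, not by polling
  evictionTime: 10_000,      // the sensor's result is considered valid for 10 seconds
};
```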

Information flow

Sensors can use the following as input arguments (illustrated in the sketch after the list):

  • Input settings (e.g. city, database record etc.)
  • Task context (result of any other sensor)
  • Runtime data (the payload which triggered the sensor execution)
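A hypothetical sketch of how the three kinds of input could reach a sensor; the `options` shape (`requiredProperties`, `task.context`, `streamdata`) is an assumption used purely for illustration.

```typescript
// Hypothetical sensor showing the three input sources; the options shape is an assumption.
interface SensorOptions {
  requiredProperties: { city: string };                 // 1. input settings
  task: { context: Record<string, { rawData?: any }> }; // 2. results of other sensors (task context)
  streamdata?: { temperature?: number };                // 3. runtime data that triggered the call
}

function weatherCheck(
  options: SensorOptions,
  send: (error: Error | null, result?: { observedState: string; rawData: object }) => void
): void {
  const city = options.requiredProperties.city;
  const previous = options.task.context['tempSensor']?.rawData; // another node's data
  const live = options.streamdata?.temperature;

  send(null, {
    observedState: live !== undefined && live > 30 ? 'Hot' : 'Normal',
    rawData: { city, live, previous },
  });
}
```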

Decisions

Decisions are modelled by attaching one or more actuators to a sensor state (or state transition), or to a combination of multiple nodes/states.
For more information, please check this blog post, which shows how powerful this rule expression is compared to decision trees.
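For illustration, the decision wiring could look roughly like the snippet below; the `triggers`, `statesTrigger` and `stateTransitionTrigger` fields are assumptions, not necessarily the actual rule schema.

```typescript
// Hypothetical decision wiring: fire actuators on sensor states or on a gate combining nodes.
const decisions = {
  triggers: [
    // Actuator attached to a single sensor state.
    { sourceLabel: 'tempSensor', destinationLabel: 'mailer', statesTrigger: ['Above'] },
    // Actuator attached to a state transition (only fires on the change Below -> Above).
    {
      sourceLabel: 'tempSensor',
      destinationLabel: 'ticketCreator',
      stateTransitionTrigger: { from: 'Below', to: 'Above' },
    },
    // Actuator attached to a logical gate that combines multiple nodes/states.
    { sourceLabel: 'AND_gate_1', destinationLabel: 'shutdownActuator', statesTrigger: ['TRUE'] },
  ],
};
```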

To find out more about the Waylay engine and its internals, go to our documentation page or read the following blogs on the same subject: