This is an exercise in imagining the current prevalent IoT architecture vision as the result of an evolutionary process, in order to explain why most IoT architectures today look so much alike. They all follow what we call a “net fishing” model, which is why they face similar challenges with the issues that real-life IoT use cases are increasingly raising, issues that call for a “pole fishing” model instead.

Phase one – connecting

First, your smart-sensor-enabled, network-connected things begin sending information about themselves and their environment to your central hub in the cloud. Connecting things (giving them senses and opening up an internet connection so they can send their data through to you) represents the dawn of the evolution of IoT. Most IoT platforms out there make their living making sure your things can do that safely.
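To make this concrete, here is a minimal sketch of phase one, assuming a hypothetical device identifier, topic name and payload shape; real deployments would publish over MQTT, HTTPS or similar:

```python
# A minimal sketch of phase one: a "thing" reads its sensors and sends
# the measurement to a central cloud hub. The transport, topic name and
# payload shape are hypothetical; real platforms use MQTT, HTTPS, etc.

import json
import time

def read_sensor():
    """Stand-in for reading a real temperature sensor."""
    return 21.5

def publish(topic, payload):
    """Stand-in for an MQTT/HTTPS publish to the cloud hub."""
    print(f"-> {topic}: {payload}")

while True:
    message = json.dumps({
        "device_id": "thing-42",        # illustrative identifier
        "temperature": read_sensor(),
        "timestamp": time.time(),
    })
    publish("devices/thing-42/telemetry", message)
    time.sleep(60)  # report once a minute
```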

Phase two – analysing and visualising

Next, as data from your things piles up and you have so much of it that you start calling it “big”, you aggregate it, explore it and begin running intelligent analytics on your data piles, visualising the results on dashboards. This is the second stage in the evolution of IoT, when you learn important new things about your systems of connected things.
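As a taste of what that phase looks like in practice, here is a minimal sketch (with made-up readings and a hypothetical data shape) of the kind of per-device aggregation that typically feeds those dashboards:

```python
# A hedged sketch of phase two: aggregate the piled-up readings and
# compute simple per-device statistics to feed a dashboard. The data
# shape is illustrative; real deployments would query a data lake.

from collections import defaultdict
from statistics import mean

readings = [
    {"device_id": "thing-42", "temperature": 21.5},
    {"device_id": "thing-42", "temperature": 23.0},
    {"device_id": "thing-7",  "temperature": 19.1},
]

by_device = defaultdict(list)
for r in readings:
    by_device[r["device_id"]].append(r["temperature"])

for device, temps in sorted(by_device.items()):
    print(f"{device}: avg={mean(temps):.1f}, max={max(temps):.1f}")
```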

Phase three – automating

Now that you have learned something, you start thinking of applying what you have learned to your existing processes, so that you can finally reap the benefits of IoT in its third major evolutionary stage: automation. And as you do, you come up against a problem that we want to address in this post. It has to do with the counter-intuitive idea that, at massive volumes, one-by-one might actually work better than “en masse” does.

Common sense tells us that dealing with work “en masse” is always more efficient than dealing with individual pieces one by one. This goes for systems as diverse as public transport, fruit picking, fishing or book selling on Amazon. The same logic applies to all of them: grouping things together and then applying the same rule to the grouped bunch is more efficient than dealing with one entity at a time. Following this logic, the same should be true for IoT, since the only new variable it adds to this optimisation formula is that the things in the system are now connected to the internet; everything else remains pretty much the same.

The three-phase evolution converges into a natural IoT architecture

Following these three evolutionary phases of IoT, a naturally evolved IoT architecture emerges: collecting data from connected objects, processing it through stream processing engines (applying rules), storing metrics in data lakes, and finally doing offline analytics and data visualisation.
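In code, the “apply rules” stage of that pipeline boils down to something like the following sketch: one global rule evaluated against every message in the stream. The threshold, message shape and sink functions are hypothetical stand-ins:

```python
# A minimal sketch of the "net fishing" pipeline: one global rule applied
# to every reading that flows through the stream processor. The message
# format, threshold value and sinks are hypothetical illustrations.

GLOBAL_THRESHOLD = 100.0  # one rule for every device, applied en masse

def process_stream(readings):
    """Apply the same rule to each incoming reading, then archive it."""
    for reading in readings:  # e.g. {"device_id": "...", "value": 42.0}
        if reading["value"] > GLOBAL_THRESHOLD:
            raise_alert(reading)        # same threshold for everyone
        store_in_data_lake(reading)     # kept for offline analytics

def raise_alert(reading):
    print(f"ALERT: {reading['device_id']} exceeded {GLOBAL_THRESHOLD}")

def store_in_data_lake(reading):
    pass  # stand-in for a write to object storage / a data lake

process_stream([{"device_id": "meter-001", "value": 120.0}])
```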

So it looks like the “internet of everything” can be achieved by connecting things, services and people with one linearly optimised pipeline. This is why most IoT architecture slides, from Azure, Amazon, Google, IBM, SAP and many others, look pretty much the same. That is not surprising, as they all follow the same vision, built on the natural three stages of the evolution of IoT.

This is Azure:

Source: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-live-data-visualization-in-power-bi

This is Google:

Source: https://cloud.google.com/solutions/architecture/real-time-stream-processing-iot

This is SAP:

Source: https://eaexplorer.hana.ondemand.com

The IoT evolutionary hurdle (the uh-oh moment)

IoT has always been presented as a volume game: there are many things out there waiting to be connected, many, many more than there are people. So, coming back to our initial premise of optimisation through automated volume handling, it would look like there is no better application of our premise than IoT, the ultimate “en masse” deal. Right?

Well, not quite. Not always.

As it turns out, for a growing number of IoT applications, when it comes to getting the most business value out of your data insights, applying rules en masse doesn’t work, even though you are dealing with massive volumes.

Let’s take smart meters as a simple example: as a utility leveraging IoT to innovate customer service, you may want to enable each customer to set their own alerting threshold for consumption. You are indeed dealing with big device volumes, hundreds of thousands of meters, but you are also dealing with consumers who expect customised service. For this application, “en masse” is not going to work. Or let’s imagine that after you run your analytics, you discover that in order to optimise for cost savings or increased efficiency, certain meters need to be handled differently, either because you work with meters that come from different manufacturers and have slight factory variations, or simply because different meters require different rules to be applied while in operation. Again, “en masse” is not going to work for you in this particular case. In fact, when looking at consumer IoT, we see a growing number of use cases where you want your intelligent analytics insights applied back into the rules at the individual level, as sketched below. Think healthcare & wellness, think insurance, think security.
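To make the contrast concrete, here is what the smart-meter use case asks for, in a deliberately simplified sketch (device IDs, thresholds and the reading format are made up): the threshold is no longer a single constant but a per-customer setting looked up when the rule is evaluated:

```python
# A hedged sketch of the smart-meter example: every customer sets their
# own consumption threshold, so the rule must be evaluated per device.
# Device IDs, thresholds and the reading format are illustrative only.

customer_thresholds = {
    "meter-001": 250.0,   # customer A wants an alert above 250 kWh
    "meter-002": 180.0,   # customer B is more conservative
}
DEFAULT_THRESHOLD = 300.0  # fallback for customers who set nothing

def check_reading(reading):
    """Compare a reading against that customer's own threshold."""
    threshold = customer_thresholds.get(reading["device_id"], DEFAULT_THRESHOLD)
    if reading["value"] > threshold:
        notify_customer(reading["device_id"], reading["value"], threshold)

def notify_customer(device_id, value, threshold):
    print(f"{device_id}: consumption {value} exceeded your limit of {threshold}")

check_reading({"device_id": "meter-002", "value": 190.0})
```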

So you now face the hard reality of having to apply different rules at the device level or device-type level in a volume game of millions of devices, and dealing “en masse” is not an option. You’ve hit the uh-oh moment. What’s more, applying rules for your particular business case often requires that “connecting things, services and people” happens at the very moment the rule is being applied.

You hit an evolutionary hurdle. The naturally evolved architecture breaks down.

The IoT platform of the future: the Waylay vision of the pole fishing model

So we have seen that current IoT platforms are built on stream pipes that tap into IoT device cloud hubs in order to apply global rules to everything that comes through. We call this the net fishing model: you catch all the fish in your patch of sea in one big net, irrespective of the differences between fish.

We propose a more granular approach, which we call the pole fishing model, where you go after each fish (device and device data) individually. This model allows you to template generic rules and then instantiate them at either a per-device or per-device-group level. Instantiating on a per-device basis is key: it is what allows you to have per-device settings, thresholds and diagnostics, and what allows users to configure personalised rules. In practice, if you have 100,000 devices you will have 100,000 rules, as opposed to one global rule applied “en masse” to all 100,000 devices. It is what works best for use cases such as our example above, where as a utility you have to manage hundreds of thousands of meters while at the same time enabling customised service per consumer.
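As a rough illustration of that templating idea (a hypothetical sketch, not the actual Waylay API), a generic rule is written once and then instantiated with each device’s own parameters, yielding one independent rule instance per device:

```python
# A hypothetical sketch of the "pole fishing" model: one generic rule
# template, instantiated once per device with that device's own settings.
# This is illustrative only, not the actual Waylay rules engine API.

from dataclasses import dataclass

@dataclass
class ThresholdRule:
    """A rule instance bound to a single device."""
    device_id: str
    threshold: float

    def evaluate(self, value):
        if value > self.threshold:
            print(f"{self.device_id}: {value} exceeded {self.threshold}")

def instantiate_rules(device_settings):
    """Turn one template into N per-device rule instances."""
    return {dev: ThresholdRule(dev, t) for dev, t in device_settings.items()}

# 100,000 devices -> 100,000 rule instances, each individually tunable
rules = instantiate_rules({"meter-001": 250.0, "meter-002": 180.0})
rules["meter-002"].evaluate(190.0)   # fires only against this device's limit
```

The point of the design is that one more rule instance costs next to nothing, while each instance remains individually inspectable and tunable.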

We have written in more detail about how the Waylay engine works and how it differs from other rules engines in this blog post, and we will be writing more about its unique integration capabilities (another key feature enabling the pole fishing model) in a future post.

Download our solution white paper to discover more about our next-generation orchestration platform.