Quality Assurance with TinyAutomator
Computer Vision provides a relentless electronic eye that watches thousands of components exit the production line in good shape. 👁️

Things used in this project
Hardware components

- M5Stack UnitV2 × 1
- Industrial Shields 012002000200 × 1
- Power supply - 24V, 1.5A × 1
- NORVI IIOT ESP32 Based Industrial Controller × 1
Software apps and online services

- Waylay TinyAutomator
Hand tools and fabrication machines

- Wire Stripper / Crimper, 10AWG - 20AWG Strip
- Wire Stripper & Cutter, 18-10 AWG / 0.75-4mm² Capacity Wires
Story
Motivation
The role of quality assurance is to ensure that the final product matches the company's quality standards. Factory end-products usually consist of assemblies of smaller sub-assemblies.
By introducing additional QA checkpoints along a production line, defects are caught earlier, before more value is added to a faulty part, and the overall efficiency of the facility increases.
Before building your own quality assurance solution, make sure to walk through our Introduction to TinyAutomator tutorial. It gives a bird's-eye view of why we picked TinyAutomator for our solution, what features it has and how to use its basic functions efficiently.
Our use case 🏭
In our case, we are monitoring an injection moulding machine that creates polypropylene fittings. For various reasons, these fittings may come out with defects typical of injection moulding, such as flow lines, sink marks and warping. In this tutorial, we will recognize the short filling of a certain fitting, as seen in the pictures below.
Such parts have to be removed before the packaging line. The machine has a basic weighing mechanism, but weight alone cannot catch every defect.

Hardware requirements 🧰
- Industrial Shields RPi-based PLC, running a TinyAutomator instance (Industrial Shields 012002000200);
- Power supply, 24 V / 1.5 A;
- M5Stack UnitV2, the standalone TinyML AI camera for edge computing (SSD202D);
- WiFi-enabled relay module; we used a NORVI IIOT ESP32-based industrial controller (NORVI-IIOT-AE02-V).
Computer vision 📷
Setting up the M5Stack UnitV2 camera
First things first: power up the camera by connecting it to your PC via a USB-C cable. Driver installation varies depending on your operating system; a thorough setup guide can be found in the official documentation.
If you are using a Linux-based machine, there is no driver installation required as the camera is automatically recognized.
Once the connection is successful, open your browser of choice and access 10.254.239.1, the static IP of the camera, which will lead you to the control interface.
The user interface gives you access to all the functioning modes of the UnitV2, as well as provides you with a continuous camera stream alongside the serial output of the camera.
Once the camera is set up and the training is done, it can be powered through the USB-C cable and run independently, without being connected to the PC. You can connect to the camera remotely using SSH; the M5Stack documentation details how to access the device as root.
The Online Classifier
For our application, we will be using the online classifier mode. While using the Online Classifier, you can train and classify the objects in the green target frame in real-time, and the feature values obtained from training can be stored on the device for the next boot.
For reliable results, you need at least 100 good photos of the features you intend to classify, in all the possible positions. For best results, we recommend having good repeatability of the system that places the objects in front of the camera.
Training the model

Under the Online Classifier tab, you will notice a checkbox list. This allows us to define the features we wish to identify and train the model accordingly. In the middle of the screen you can observe the live camera stream and the bounding box in which we will be placing the feature to be recognized; on the right side, you can see the serial output of the camera in JSON format.
Because we are training a model from scratch, first click the Reset button, rename the class to the feature you want to identify (for example, defect) and tick the checkbox next to it. Next, place the object inside the green bounding box and press the Train button. Congrats! You just recorded your first data point. Keep doing this until you have at least 100 good pictures of the feature.

Next, click on the Add button, rename the new class to something along the lines of no_defect and tick the checkbox next to it. Now we will train the model to recognize a proper object. Just like before, take at least 100 good photos.

Finally, we must take at least 50 good photos of the background against which the objects are presented. We strongly suggest that the background is static and, if possible, uniformly coloured (something along the lines of a big piece of cardboard).
Model execution
Once the training is done, click Save and Run and the model is saved on the UnitV2. If you wish to add new samples to your model later, simply tick the checkbox corresponding to the feature you wish to train and keep adding data points to it. When you are done, clicking Save and Run again will update your model.

Once the model is running on the UnitV2, the value corresponding to the best_match key is the result of the analysis.
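For reference, a single line of the classifier's output looks roughly like the following; the class name and score here are illustrative, not taken from our trained model:

```json
{"running": "Online Classifier", "best_match": "defect", "best_score": 0.92}
```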
Data gathering 📊

Fundamentally, the system we devised for this use case employs the UnitV2 camera to monitor the production line. If a defective item is detected, the camera sends a message via MQTT to TinyAutomator, where a task gathers the data stream, passes it through a set of rules and sends an MQTT message to a NORVI IIOT controller. The controller triggers a relay linked to an actuator that disposes of the defective item.
To integrate the M5Stack UnitV2 with TinyAutomator, we modified the camera's firmware to filter the relevant value out of the JSON output and send the result via MQTT to a certain topic. The first thing we had to do was integrate the Paho MQTT client library:
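The actual firmware changes are in the repository; here is a minimal sketch of the client setup, assuming paho-mqtt 1.x. The broker IP, client ID and topic name are placeholders for your own setup:

```python
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.1.100"   # placeholder: TinyAutomator / MQTT broker IP
MQTT_TOPIC = "outTopic"       # placeholder: topic the TinyAutomator task listens on

client = mqtt.Client("unitv2-qa")
client.connect(BROKER_IP, 1883, 60)
client.loop_start()           # service the connection from a background thread
```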
We've also declared some variables to be used for the detection:
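The exact declarations live in the repository; a plausible set is sketched below. The defect payload "2" matches what TinyAutomator expects in our flow; the variable names and the payload for a good part are assumptions:

```python
last_state = None            # last published class, so we only send on change
DEFECT_CLASS = "defect"      # class label used during training
OK_CLASS = "no_defect"       # label for a good part
DEFECT_MESSAGE = "2"         # payload our TinyAutomator task expects for a defect
OK_MESSAGE = "1"             # assumed payload for a good part
```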
Here is a very basic detection routine, based on the labels we added in the previous step, that automatically sends a message via MQTT on state change:
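The repository holds the authoritative version; this hedged reconstruction shows the idea, reusing the client and variables from the sketches above:

```python
import json

def handle_result(raw_line):
    """Parse one line of classifier JSON and publish only on state change."""
    global last_state
    try:
        result = json.loads(raw_line)
    except ValueError:
        return                          # skip non-JSON lines in the stream
    state = result.get("best_match")
    if state is None or state == last_state:
        return                          # nothing new to report
    last_state = state
    if state == DEFECT_CLASS:
        client.publish(MQTT_TOPIC, DEFECT_MESSAGE)
    elif state == OK_CLASS:
        client.publish(MQTT_TOPIC, OK_MESSAGE)
```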
We also added these three lines in the server_core.py file at line 893 for the automatic change of the detection type:
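We do not reproduce the exact lines here; the sketch below shows their intent, under the assumption that server_core.py exposes a mode-switch helper (switch_function and the "online_classifier" identifier are hypothetical names; see the repository for the real code):

```python
# Hypothetical reconstruction - the exact three lines are in the repository.
# Intent: boot straight into Online Classifier mode instead of waiting for
# a mode-switch command from the web interface.
DEFAULT_FUNCTION = "online_classifier"    # assumed mode identifier
switch_function(DEFAULT_FUNCTION)         # hypothetical helper in server_core.py
print("detection type set to", DEFAULT_FUNCTION)
```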
You can find the complete server_core.py file in the GitHub repository.
Actuator control ⚙️
Using the template editor of TinyAutomator, we created a flow that listens on a certain MQTT topic and, if a received message corresponds to a defective object, sends an MQTT message to an actuator. Additionally, every time a defective object is identified, a counter is incremented so we can keep track of the total number of manufacturing failures.

To replicate this task, go to Templates, click the arrow next to the Add template button, then click Upload template and add the template we have created (the QA_Example.json file from the GitHub repository), adjusting it to your needs. After the template is created, click Create Task, give it an appropriate name, select your resource, make sure Reactive is checked and click Create Task.
When the camera detects a defect, it sends a "2" on the MQTT topic that TinyAutomator monitors. TinyAutomator increments the counter and, in response, publishes a "2" on the "inTopic" topic, which is the input for the actuator: the Norvi setup activates a relay, in our case an air compressor jet that pushes the part into the defect bin.
Here is the code for the Norvi relay trigger. Be sure to fill in your WiFi credentials and the TinyAutomator IP, which also serves as the MQTT broker IP.
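The full sketch lives in the GitHub repository; below is a minimal reconstruction using the Arduino framework with the widely used PubSubClient library. The relay pin, client ID, credentials and broker IP are placeholders; check the NORVI pinout for the actual relay output.

```cpp
#include <WiFi.h>
#include <PubSubClient.h>

const char* WIFI_SSID   = "your-ssid";      // replace with your WiFi credentials
const char* WIFI_PASS   = "your-password";
const char* MQTT_BROKER = "192.168.1.100";  // TinyAutomator / MQTT broker IP
const int   RELAY_PIN   = 26;               // placeholder: check the NORVI relay pin

WiFiClient espClient;
PubSubClient client(espClient);

// Fire the relay (air jet) whenever a "2" arrives on inTopic
void callback(char* topic, byte* payload, unsigned int length) {
  if (length > 0 && payload[0] == '2') {
    digitalWrite(RELAY_PIN, HIGH);
    delay(500);                             // hold the jet open briefly
    digitalWrite(RELAY_PIN, LOW);
  }
}

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(250);
  client.setServer(MQTT_BROKER, 1883);
  client.setCallback(callback);
}

void loop() {
  if (!client.connected()) {
    if (client.connect("norvi-qa-relay")) client.subscribe("inTopic");
    else delay(1000);
  }
  client.loop();
}
```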
Here is the injection moulding machine in action:
What’s next? 🚀
While this example is a basic computer vision setup with a single camera, which needs to be carefully placed on the production line in the area with the most issues, you could add more cameras to cover additional angles, depending on the dimensions of the parts you produce.
And even so, computer vision is not limited to quality assurance alone; there are many issues in a factory that could be tackled by an electronic eye once it is properly trained and deployed.
We have other tutorials from which you can learn to use TinyAutomator for industrial use cases:
If you need help in deploying this solution or building something similar please contact Waylay.io for the low-code IoT Solution or Zalmotek.com for IoT-enabled hardware prototypes.