Quality Assurance with TinyAutomator

Computer Vision provides the relentless electronic eye that watches thousands of components exit the production line in good shape. 👁️

Things used in this project

Hardware components

  • M5Stack UnitV2                                    × 1
  • Industrial Shields 012002000200                   × 1
  • Power supply - 24 V, 1.5 A                        × 1
  • NORVI IIOT ESP32-based Industrial Controller      × 1

Software apps and online services

Hand tools and fabrication machines

Wire Stripper / Crimper, 10AWG - 20AWG Strip

Wire Stripper & Cutter, 18-10 AWG / 0.75-4mm² Capacity Wires



The role of quality assurance is to ensure that the final product matches the company's quality standards. Factory end-products usually consist of assemblies of smaller sub-assemblies.

By introducing additional nodes of QA in a production line, the overall efficiency of the facility increases.

Before building your own Quality Assurance solution, make sure to walk through our Introduction to TinyAutomator tutorial, as it gives you a bird's-eye perspective on why we picked it for our solution, what features it has and how to use its basic functions efficiently.

Our use case 🏭

In our case, we are monitoring an injection moulding machine that creates polypropylene fittings. For various reasons, those fittings may end up with defects specific to injection moulding, like flow lines, sink marks, warping, and others. In this tutorial, we will recognize the short filling of a certain fitting, as seen in the pictures below.

Of course, such parts have to be eliminated from the packaging line. The machine has a basic weighing mechanism, but it cannot detect some defects based on weight alone.

Hardware requirements 🧰

Computer vision 📷

Setting up the M5stack UnitV2 camera

First things first, power up your camera by connecting it to your PC via a USB-C cable. Driver installation varies depending on your operating system, and a thorough setup guide can be found in the official documentation.

If you are using a Linux-based machine, there is no driver installation required as the camera is automatically recognized.

Once the connection is successful, open your browser of choice and access the static IP of the camera, which will lead you to the control interface.

The user interface gives you access to all the functioning modes of the UnitV2, as well as provides you with a continuous camera stream alongside the serial output of the camera.

Once the camera is set up and the training is done, it can be powered through the USB-C cable and run independently, without it being connected to the PC. You can connect to the camera remotely using SSH. In the M5Stack documentation you can find details about how to access the device as root:

ssh m5stack@m5stack
# user: m5stack / pwd: 12345678
# user: root / pwd: 7d219bec161177ba75689e71edc1835422b87be17bf92c3ff527b35052bf7d1f

The Online Classifier

For our application, we will be using the online classifier mode. While using the Online Classifier, you can train and classify the objects in the green target frame in real-time, and the feature values obtained from training can be stored on the device for the next boot.

For reliable results, you need at least 100 good photos of the features you intend to classify, in all the possible positions. For best results, we recommend having good repeatability of the system that places the objects in front of the camera.

Training the model

Under the Online Classifier tab, you will notice a checkbox list. This allows us to define the features we wish to identify and train the model accordingly. In the middle of the screen, you can observe the live camera stream and the bounding box in which we will place the feature to be recognized; on the right side, you can see the serial output of the camera in JSON format.

Because we are training a model from scratch, first click the reset button, rename the class to the feature you want to identify and click the checkbox next to it. Next, place the object inside the green bounding box and press the train button. Congrats! You just recorded your first data point. Keep doing this until you have at least 100 good pictures of the feature.

Next, click on the add button, rename the new class to something along the lines of no_defect and click on the checkbox next to it. Now, we will be training the model to recognize a proper object. Just like before, take at least 100 good photos.

Finally, we must take at least 50 good photos of the background against which the objects are presented. We strongly suggest that the background is static and, if possible, uniformly coloured (something along the lines of a big piece of cardboard).

Model execution

Once the training is done, click Save and run, and the model is saved on the UnitV2. If you wish to further add new samples to your model, simply click on the checkbox corresponding to the feature you wish to train and keep adding data points to it. When you are done, clicking Save and run will update your model.

Once the model is running on the UnitV2, the value corresponding to the best_match key is the result of the analysis.
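To make the analysis result concrete, here is a minimal sketch of extracting the winning class from one line of the camera's serial output. The sample payload below is an assumption for illustration; only the best_match key is documented above.

```python
import json

# One JSON document per analysed frame (hypothetical sample payload;
# the "best_match" key is the one this tutorial relies on)
line = '{"running": "Online Classifier", "best_match": "defect"}'

doc = json.loads(line)
label = str(doc["best_match"])   # the winning class for this frame
print(label)                     # -> defect
```

The same doc["best_match"] lookup is what the firmware modification in the next section filters on.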

Data gathering 📊

Fundamentally, the system we have devised for this use case employs the UnitV2 camera to monitor the production line. If a defective item is detected, the camera sends a message via MQTT to TinyAutomator, where a task gathers the data stream, passes it through a set of rules and sends an MQTT message to a Norvi IIOT controller, which triggers a relay linked to an actuator that disposes of the defective item.

To integrate the M5Stack UnitV2 with TinyAutomator, we had to modify the firmware of the camera to filter out the relevant value from the JSON and send the result via MQTT to a certain topic. The first thing we had to do was to integrate the Paho MQTT Client library:

import json
import time
import paho.mqtt.client as mqttClient

Connected = False   # global variable for the state of the connection

def on_connect(client, userdata, flags, rc):
    global Connected                # use global variable
    if rc == 0:
        print("Connected to broker")
        Connected = True            # signal connection
    else:
        print("Connection failed")
        Connected = False

broker_address = ""
port = 1883

client = mqttClient.Client("Camera Detection")     # create new instance
#client.username_pw_set(user, password=password)   # set username and password
client.on_connect = on_connect                     # attach function to callback
client.connect(broker_address, port=port)          # connect to broker
client.loop_start()                                # start the loop

We've also declared some variables to be used for the detection:

defectDetected = False
noDefectDetected = False
backgroundDetected = False

Here is a very basic detection code based on the labels we added in the previous step that will automatically send a message on state change via MQTT:

global backgroundDetected
global noDefectDetected
global defectDetected

if str(doc["best_match"]) == "background" and not backgroundDetected:
    client.publish("test-topic", json.dumps({"value": 0}))
    backgroundDetected = True
    defectDetected = False
    noDefectDetected = False
if str(doc["best_match"]) == "no_defect" and not noDefectDetected:
    client.publish("test-topic", json.dumps({"value": 1}))
    backgroundDetected = False
    defectDetected = False
    noDefectDetected = True
if str(doc["best_match"]) == "defect" and not defectDetected:
    client.publish("test-topic", json.dumps({"value": 2}))
    backgroundDetected = False
    defectDetected = True
    noDefectDetected = False

We also added these three lines in the server_core.py file at line 893 for the automatic change of the detection type:

protocol.write("{\"msg\":\"Waiting 5 seconds for the server to start.\"}\r\n")

You can find the complete server_core.py file in the Github repository.

Actuator control ⚙️

Using the template editor of TinyAutomator, we have created a flow that listens on a certain MQTT topic and, if the message received corresponds to a defective object, sends an MQTT message to an actuator. Additionally, every time a defective object is identified, a counter is incremented so we can keep track of the total number of manufacturing failures.
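The flow's logic can be sketched in plain Python as a simplified stand-in for the TinyAutomator template. The topic names match this tutorial; the function and variable names are hypothetical, for illustration only.

```python
import json

DEFECT_VALUE = 2     # value the camera publishes for a defective part
defect_counter = 0   # running total of manufacturing failures

def handle_camera_message(payload):
    """Mimic the reactive task: count defects and forward the trigger."""
    global defect_counter
    value = json.loads(payload)["value"]
    if value == DEFECT_VALUE:
        defect_counter += 1
        # In TinyAutomator this step is an MQTT publish to "inTopic",
        # the topic the Norvi controller subscribes to.
        return ("inTopic", "2")
    return None          # background or good part: nothing to actuate

# Simulated camera stream: background, good part, defective part
for msg in ['{"value": 0}', '{"value": 1}', '{"value": 2}']:
    out = handle_camera_message(msg)

print(defect_counter)    # -> 1
print(out)               # -> ('inTopic', '2')
```

Only the defect message produces an actuator trigger; the other values simply flow through without incrementing the counter.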

To replicate this task, go to Templates, click the arrow next to the Add template button, then click Upload template and add the template we have created (the QA_Example.json file from the Github repository), which you can adjust according to your needs. After the template is created, click on Create Task, give it an appropriate name, select your resource, make sure Reactive is checked and click Create Task.

When the camera detects a defect, it sends a "2" on the MQTT topic that goes through TinyAutomator. The value is added to the counter and, in response, TinyAutomator publishes a "2" on "inTopic", the input topic for the actuator. The Norvi setup then activates a relay, in our case an air-compressor jet that pushes the part into the defect bin.

Here is the code for the Norvi relay trigger. Be sure to fill in your WiFi credentials and the TinyAutomator IP address, which is also the MQTT broker IP.

#include <WiFi.h>
#include <PubSubClient.h>

// Update these with values suitable for your network.
const char* ssid = "WifiSSID";
const char* password = "WifiPWd";
const char* mqtt_server = "IpAddressMqttTinyAutomator";

WiFiClient espClient;
PubSubClient client(espClient);
unsigned long lastMsg = 0;
#define MSG_BUFFER_SIZE  (50)
char msg[MSG_BUFFER_SIZE];
int value = 0;
#define LED_pin  12

void setup_wifi() {
  // We start by connecting to a WiFi network
  Serial.print("Connecting to ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  Serial.println(WiFi.localIP());
}

void callback(char* topic, byte* payload, unsigned int length) {
  Serial.print("Message arrived [");
  Serial.print(topic);
  Serial.print("] ");
  for (int i = 0; i < length; i++) {
    Serial.print((char)payload[i]);
  }
  Serial.println();
  // Trigger the relay if a '2' was received as the first character
  if ((char)payload[0] == '2') {
    digitalWrite(LED_pin, HIGH);
    delay(1000);                 // keep the relay on long enough to eject the part
    digitalWrite(LED_pin, LOW);
  }
}

void reconnect() {
  // Loop until we're reconnected
  while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    // Create a random client ID
    String clientId = "ESP32Client-";
    clientId += String(random(0xffff), HEX);
    // Attempt to connect
    if (client.connect(clientId.c_str())) {
      Serial.println("connected");
      // ... and resubscribe
      client.subscribe("inTopic");
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
      // Wait 5 seconds before retrying
      delay(5000);
    }
  }
}

void setup() {
  Serial.begin(115200);
  pinMode(LED_pin, OUTPUT);
  setup_wifi();
  client.setServer(mqtt_server, 1883);
  client.setCallback(callback);
}

void loop() {
  if (!client.connected()) {
    reconnect();
  }
  client.loop();
}

Here is the injection moulding machine in action:

What’s next? 🚀

While this example is a basic Computer Vision detection with only one camera, which must be carefully placed on the production line in the area with the most issues, you could add more cameras to cover more angles if needed, based on the dimensions of your produced parts.

Even so, Computer Vision is not limited to Quality Assurance alone: there are many issues in a factory that could be tackled by an electronic eye once it is properly trained and deployed.

We have other tutorials from which you can learn to use TinyAutomator for industrial use-cases:

If you need help in deploying this solution or building something similar please contact Waylay.io for the low-code IoT Solution or Zalmotek.com for IoT-enabled hardware prototypes.


Github Repository

Zalmotek / Quality_assurance_with_tiny_automator