Parking occupancy detection and car counting analytics

Using a Raspberry Pi and web camera, do periodic photo capture of cars on the parking and store the car count in the database


Things used in this project

Hardware components

Raspberry Pi 4 Model B

Raspberry Pi Camera Module V2

Software apps and online services

Waylay TinyAutomator

Edge Impulse Studio



Car park occupancy is one of the frequently discussed topics in the Smart Cities domain. First things first: you need data about the current occupancy of existing car parks. That usually leads to expensive systems with a lot of sensors, detectors, connections, etc. But what if you could grab a cheap device (yes, you can use hype words like Edge device ;) ), use a web camera to capture images from your window periodically, run image recognition on them and store the recognized number of cars in a time series database? You might say: “I would need data science skills, Python, database access, some data visualization code, maybe cloud services to store the data, …”

But actually you can do all of this with basic knowledge of JavaScript, using the latest technologies on your Edge device. And all processing happens on that device, without any need to contact cloud services.

The goal

Using a Raspberry Pi and a USB web camera, periodically capture photos of a parking space, recognize the number of cars in the parking lot, store that number in a time series database and get a view on that time series data.

The solution

First, you need a car detection TinyML model trained on pictures of a car park. Luckily, I recently found the blog post Car Parking Occupancy Detection Using Edge Impulse FOMO, which links to a publicly available Edge Impulse project that can be imported into your Edge Impulse account. We will use that model to do the car detection in our case.

Second, you need an image capturing/streaming service. For that we will use a simple shell script which captures images using the fswebcam utility, saves them to the filesystem and sends an MQTT message with the filename to the TinyAutomator Mosquitto MQTT server.

We will build a Docker container with that shell script, so it can run alongside the TinyAutomator Docker containers. Image files will be shared via Docker engine volumes.

As a last step, we will create an image classification Waylay sensor which uses the Edge Impulse SDK to perform car recognition, and use that sensor to store the result in TinyAutomator time series storage.

Data visualization is already part of TinyAutomator, so we will immediately see the parking occupancy over time.

Software setup

Installing Waylay TinyAutomator

There is a nice article about the installation of TinyAutomator. You can follow the steps described in it: Edge Computing with TinyAutomator in Industrial Applications.

Installing the dependencies to run Edge Impulse

Register for a free account on the Edge Impulse platform here, then follow the Edge Impulse Raspberry Pi installation instructions.

Importing the TinyML Model

Log in to your Edge Impulse account. Open the link to the public Edge Impulse project from the blog post mentioned earlier in your browser and click the “Clone this project” button. That creates a duplicate project under your own account, so you will be able to download the model and use it on your Raspberry Pi.

Connecting the device

To connect the Raspberry Pi to the Edge Impulse project, run the following command in the terminal:

edge-impulse-linux
If you have previously used your device for other Edge Impulse projects, run the following command to reassign the device to a new project:

edge-impulse-linux --clean

If you have only one active project, the device will automatically be assigned to it. If you have multiple Edge Impulse projects, select the desired one in the terminal.

Give a recognizable name to your board and press enter.

Your board is now connected to the Edge Impulse project and you can see it in the connected devices panel.

Deploying the model on the Raspberry Pi

To run the inference on the target, use the following command:

edge-impulse-linux-runner --clean

and select the project containing the model you wish to deploy.

Once the model downloads, open the URL printed in the terminal in a browser to watch the video feed. If you point the camera attached to the Raspberry Pi at the parking place, it should start recognizing cars. At this point you are ready to integrate with TinyAutomator for further processing.

You should also download and store the TinyML model on the Raspberry Pi for the integration:

edge-impulse-linux-runner --download cars.eim

That saves the cars.eim file on the Raspberry Pi filesystem; you will need to put it into the Docker shared volume later.

Creating the image streaming container

A simple shell script can be used to capture images periodically and send a message to the MQTT broker of TinyAutomator:

#!/bin/sh
if [ -z "$VIDEO_DEVICE" ]; then
  DEVICE_STR=""
else
  DEVICE_STR="-d $VIDEO_DEVICE"
fi
while true
do
  find $DIRECTORY -name "test.*.jpg" -mmin +1 -exec rm -rf {} \;
  TIME=$(date +%s)
  fswebcam -q -r 320x320 -S 3 -F 3 -D $DELAY $DEVICE_STR --no-banner --no-shadow --no-title $DIRECTORY/test.$TIME.jpg
  #raspistill -o test.jpg
  echo "took picture"
  JSON_FMT='{"image":"%s"}\n'
  JSON=$(printf "$JSON_FMT" "test.$TIME.jpg")
  mosquitto_pub -h $HOST -p $PORT -t $TOPIC -m $JSON
done
As you can see, captured files are cleaned up on a timely basis and are not sent over MQTT, so they are never exposed publicly and stay private in local folders.
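The message published by the script is a tiny JSON document carrying only the filename, so the consuming side has to rebuild the full path inside the shared volume. A minimal sketch of that step (the imagePathFromPayload helper is an illustrative assumption, not part of TinyAutomator; the /sandbox/files base directory matches the files volume mounted into the sandbox container):

```javascript
// The capture script publishes payloads like {"image":"test.<timestamp>.jpg"}.
// This helper (an illustrative sketch, not part of TinyAutomator) rebuilds the
// absolute path of the captured image inside the shared volume.
function imagePathFromPayload(payload, baseDir = '/sandbox/files') {
  const msg = JSON.parse(payload);
  return baseDir + '/' + msg.image;
}

console.log(imagePathFromPayload('{"image":"test.1651146314.jpg"}'));
// -> /sandbox/files/test.1651146314.jpg
```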

A Dockerfile that can be used to create a docker image:

FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    fswebcam mosquitto-clients \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /scripts
# copy the capture script from above (filename assumed here; adjust to yours)
COPY --chmod=755 stream.sh /scripts
WORKDIR /scripts
ENTRYPOINT []
CMD ["/scripts/stream.sh"]

You can build the Docker image yourself and push it to a registry, but there is already one which I pushed to our public repository: waylay/tinyautomator-streamer:latest

Adding image streaming service to TinyAutomator

TinyAutomator is actually a composition of Docker services that are started from a Docker Compose file. So you just add an additional Docker service to the tinyautomator-raspberrypi.yml file which defines/runs the TinyAutomator services. The following changes should be made:

  • Add a shared volume for image files in the volumes section of tinyautomator-raspberrypi.yml:
volumes:
  files:
    name: tinyautomator-files
  • Add the streaming service definition and adjust the settings according to your camera. In my case the USB camera is attached to /dev/video1; if you use a Raspberry Pi Camera it will use /dev/video0. Keep in mind that you should map your existing /dev/videoXXX devices into the Docker container. You can also set DELAY to some other value; it is the number of seconds between two image snapshots:
services:
  stream:
    image: waylay/tinyautomator-test:streamer
    container_name: tinyautomator-stream
    devices:
      - "/dev/vchiq:/dev/vchiq"
      - "/dev/video0:/dev/video0"
      - "/dev/video1:/dev/video1"
      - "/dev/video10:/dev/video10"
      - "/dev/video11:/dev/video11"
      - "/dev/video12:/dev/video12"
      - "/dev/video13:/dev/video13"
      - "/dev/video14:/dev/video14"
      - "/dev/video15:/dev/video15"
      - "/dev/video16:/dev/video16"
      - "/dev/video18:/dev/video18"
    environment:
      - HOST=mosquitto
      - PORT=1883
      - TOPIC=stream
      - VIDEO_DEVICE=/dev/video1
      - DELAY=30
      - DIRECTORY=/files
    volumes:
      - files:/files
    restart: on-failure

  • Also adjust the sandbox service definition in the tinyautomator-raspberrypi.yml file to mount the files volume inside the sandbox container:
volumes:
  # configure the path to the storage on the host machine before :
  - plugins:/sandbox/plugs
  - files:/sandbox/files
  • Restart the TinyAutomator services:
pi@raspberrypi:~ $ docker-compose -f tinyautomator-raspberrypi.yml stop
pi@raspberrypi:~ $ docker-compose -f tinyautomator-raspberrypi.yml up -d
  • Check the stream service log files. You should see log lines with the message “took picture”:
pi@raspberrypi:~ $ docker logs tinyautomator-stream
…
took picture
took picture
  • You should also see files appearing in the tinyautomator-files volume:
pi@raspberrypi:~ $ sudo ls -lat /var/lib/docker/volumes/tinyautomator-files/_data
total 1856
drwxr-xr-x 2 root root    4096 Apr 28 12:45 .
-rw-r--r-- 1 root root  100760 Apr 28 12:45 test.1651146314.jpg
-rw-r--r-- 1 root root   96252 Apr 28 12:45 test.1651146282.jpg

Point your camera at the parking place! ;)

Creating a Waylay Sensor for image classification

Now you can go to the next step and create a Waylay sensor: JavaScript Node.js code which uses the Edge Impulse Node.js SDK to perform image classification and returns the recognized objects (cars in our case) as a result. There is documentation that explains how to develop a sensor plugin. Below is the code of the plugin which does the image classification; you can upload it using the TinyAutomator web console in the “Plugins” section.

The sensor has two input parameters: modelfile (the file we downloaded earlier using the edge-impulse-linux-runner utility) and imagefile, the path to the captured image. The code is quite simple: it initializes the Edge Impulse LinuxImpulseRunner object, resizes the image according to the model parameters (the required image height/width of the model), runs the image classification and sends the result back in JSON format to the TinyAutomator engine, so it can be used in further processing steps of a TinyAutomator template. Here is the JavaScript code:

// edge impulse plugin
const { LinuxImpulseRunner } = require("edge-impulse-linux");
const sharp = require('sharp');

// This plugin expects two input properties:
// 1. modelfile - the model file
// 2. imagefile - the image file
const { modelfile, imagefile } = options.requiredProperties;

async function execute () {
  let runner;
  try {
    // Load the model
    runner = new LinuxImpulseRunner(modelfile);
    let model = await runner.init();
    console.log('Starting the image classifier for',
      model.project.owner + ' / ' + model.project.name,
      '(v' + model.project.deploy_version + ')');
    console.log('Parameters',
      'image size', model.modelParameters.image_input_width + 'x' +
        model.modelParameters.image_input_height + ' px (' +
        model.modelParameters.image_channel_count + ' channels)',
      'classes', model.modelParameters.labels);
    // Resize the image to the model input size and classify it
    let resized = await resizeImage(model, imagefile);
    let classifyRes = await runner.classify(resized.features);
    const value = {
      observedState: 'classified',
      rawData: {
        result: classifyRes.result
      }
    };
    // Return the classification result to the TinyAutomator engine
    send(null, value);
    await new Promise(resolve => setTimeout(resolve, 1000));
  } catch (error) {
    send(null, { observedState: 'error', rawData: { errorMessage: 'Failed to classify: ' + error } });
  } finally {
    if (runner) await runner.stop();
  }
}

async function resizeImage(model, data) {
  // resize the image and extract raw pixel features
  let features = [];
  let img = sharp(data).resize({
    height: model.modelParameters.image_input_height,
    width: model.modelParameters.image_input_width
  });
  let buffer = await img.raw().toBuffer();
  if (model.modelParameters.image_channel_count === 3) {
    for (let ix = 0; ix < buffer.length; ix += 3) {
      let r = buffer[ix + 0];
      let g = buffer[ix + 1];
      let b = buffer[ix + 2];
      // tslint:disable-next-line: no-bitwise
      features.push((r << 16) + (g << 8) + b);
    }
  } else {
    for (let p of buffer) {
      // tslint:disable-next-line: no-bitwise
      features.push((p << 16) + (p << 8) + p);
    }
  }
  return { img: img, features: features };
}

execute();

Creating a TinyAutomator template for storing the discovered number of cars

On TinyAutomator we need to create a template. A template is a definition of a processing flow. It will be triggered by incoming stream data on the MQTT broker. It reads the incoming file name and calls the EdgeImpulseFoto sensor, which we uploaded in the previous step, for object recognition. Based on the classification result, it calls a script sensor to calculate the number of cars returned by the classification, and then calls the storeMessage sensor to store the current number of cars on a TinyAutomator resource (digital twin), which we will create and name “parking1”. The last two sensors already exist in TinyAutomator out of the box. Here is the link to the template file. You can upload it using the TinyAutomator web console in the “Templates” section.
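The “script sensor to calculate the number of cars” step above can be sketched as a small function over the classification result. The result shape (result.bounding_boxes entries with label and value fields) follows the Edge Impulse Linux SDK for FOMO object detection models; the confidence threshold and the sample data below are illustrative assumptions:

```javascript
// Sketch of the "script sensor" step: counting cars in a classification result.
// The confidence threshold and sample data are illustrative assumptions.
function countCars(classifyRes, threshold = 0.5) {
  const boxes = (classifyRes.result && classifyRes.result.bounding_boxes) || [];
  return boxes.filter(b => b.label === 'car' && b.value >= threshold).length;
}

// Example of what runner.classify() could return for one captured frame:
const classifyRes = {
  result: {
    bounding_boxes: [
      { label: 'car', value: 0.93, x: 16, y: 48, width: 8, height: 8 },
      { label: 'car', value: 0.81, x: 96, y: 32, width: 8, height: 8 },
      { label: 'car', value: 0.34, x: 200, y: 80, width: 8, height: 8 } // below threshold
    ]
  }
};

console.log(countCars(classifyRes)); // 2
```

The resulting count is the number the storeMessage sensor then writes to the parking1 resource.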


Copy EdgeImpulse model file to the docker volume

The downloaded cars.eim Edge Impulse TinyML model should be copied to the tinyautomator-plugins volume, as that is the one used by the processing template. It should also be made executable:

pi@raspberrypi:~ $ sudo cp cars.eim /var/lib/docker/volumes/tinyautomator-plugins/_data
pi@raspberrypi:~ $ sudo chmod +x /var/lib/docker/volumes/tinyautomator-plugins/_data/cars.eim
pi@raspberrypi:~ $ sudo ls -alth /var/lib/docker/volumes/tinyautomator-plugins/_data
total 46M
-rwxr-xr-x 1 root root 9.3M Apr 25 12:49 cars.eim

Create a Parking resource

Go to the TinyAutomator console and create a resource named “parking1”.

Create a data ingestion for MQTT channel

Via the “Data Ingestion” section of the TinyAutomator console, create a connector for image file streaming. Use the “MQTT Broker Subscription” integration and configure it with the following parameters:

Integration name: Mqtt (but it can be named whatever you like)
Topics: stream
Connection string: mqtt://mosquitto
Resource property name: resource
Client ID: test-client
Port: 1883

The other parameters can be left empty. Once the connector is created and enabled, you should see data coming in from the image streaming service:


The “stream” resource will also be created automatically. You can check it in the “Resources” section of the TinyAutomator console.

Create a task

Go to the “Tasks” section of the TinyAutomator console and create a task with the following parameters:

Name: car park count
Template: choose “Car parking count” (the one uploaded in a previous step)
Resource: select “stream” from the drop-down
Type: click the “Reactive” button

Click the “Next” button and fill in the modelname variable with the value “cars.eim”.

Click the “Create task” button. You should see the task start working. You can check the log file of the task.


Visualization of time series data

Once the task starts working, the recognized number of cars will be stored in the TinyAutomator time series database for the resource named parking1 (this is configured in the “car parking count” template as a parameter of the “storeMessage” sensor).
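Conceptually, each processed image yields one time series observation for parking1. A hedged sketch of such an observation follows; the exact storeMessage payload is internal to TinyAutomator, so the field names here (resource, data.cars, timestamp) are illustrative assumptions, not the real API:

```javascript
// A minimal sketch of one stored observation for the "parking1" resource.
// Field names are illustrative assumptions, not the real TinyAutomator API.
function buildObservation(carCount, timestampMs = Date.now()) {
  return {
    resource: 'parking1',
    data: { cars: carCount },
    timestamp: timestampMs
  };
}

console.log(buildObservation(4, 1651146314000));
```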

If you go to that resource and check the “data” section you will see the stored data lines.


And you can also quickly explore it using the “explore” button.


You can build simple yet powerful automations in a matter of hours using existing low-code/no-code solutions like Edge Impulse ML and Waylay TinyAutomator. If you need help deploying this solution or building something similar, please contact us about the low-code IoT solution.


tinyautomator edgeimpulse demo