A circular security camera mounted on a wall

Computer vision is an incredibly fast-growing field, and recent developments have made it possible to start experimenting quickly with almost no prior experience. In this post we’ll show you how to set up a practical computer vision analytics system using the $99 Nvidia Jetson Nano Developer Kit running OpenDataCam. With OpenDataCam we can recognise, track, and count people and a variety of vehicles from a USB webcam feed. We will send the collected data to InfluxDB for visualization and analysis, and set up the Jetson Nano for remote operation and management using a cellular modem and a Hologram SIM.

The video below shows a recording of a live field test I did outside a filling station, counting vehicles and pedestrians travelling in different directions.


To get started, you will need the following hardware:

  • Jetson Nano Developer Kit
  • 5V 4A Power Supply with barrel connector
  • Micro SD Card, class 10 or better, 64 GB recommended
  • Wi-Fi adaptor (Edimax N150 or Intel 8265NGW M.2 card recommended) or Ethernet cable
  • USB Webcam
  • 4G Modem - USB or Raspberry Pi Hat (Tested with D-Link DWM-222)
  • Hologram SIM Card
  • HDMI monitor
  • USB Mouse and Keyboard

How to set up the Nvidia Jetson Nano

The Nvidia Jetson Nano is a powerful little single board computer with a GPU for running neural networks to do things like image classification, object detection or speech processing. Its size and low cost mean that it’s perfect for edge computing applications like retail analytics or various industrial uses. Combined with a cellular modem and USB webcam, it allows for quick and easy remote deployments, with the only field requirement being a power supply.

The Jetson Nano has no shortage of documentation and online resources, with Nvidia really having put effort into making machine learning approachable for almost anyone. It’s possible to get a basic object detection demo running in 10 lines of Python code.

  1. Download the latest SD card image for the Jetson Nano
  2. Flash the image to the SD Card from your computer. On a Windows PC I prefer BalenaEtcher. If you haven’t done this before, complete instructions are on the Nvidia website.
  3. If you're using the barrel socket power supply, place a jumper on the J48 pins, just behind the barrel socket. This deactivates the micro USB port as a power supply.
  4. Insert the micro SD card into the slot on the Nano
  5. Plug in the power supply, mouse, keyboard, and Wi-Fi adaptor or network cable

The Jetson should now start up and guide you through system and network configuration and setting up login credentials. On the user setup page, select the “Log in automatically” option so that all services can start without user intervention.

Once done, you should see the desktop. Open a terminal window and do the following:

Check for updates, and update all packages. This might take a while.

sudo apt update && sudo apt upgrade

Install the Nano text editor and curl

sudo apt install nano curl

For deployment a cellular connection is really convenient, but for testing and initial setup an Ethernet or Wi-Fi connection is quicker to set up. If you are using a Wi-Fi adaptor for testing, disable Wi-Fi power management to improve connection stability.

sudo nano /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf

It will open a file with the following content:

[connection]
wifi.powersave = 3

Disable Wi-Fi power saving by changing 3 to 2.
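
After the change, the file should look like this:

```ini
[connection]
wifi.powersave = 2
```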

Get the IP address of your Jetson Nano by running

ifconfig
Restart the Jetson before continuing. Any command line operation from this point forward can be done either directly on the Nano with the keyboard, mouse and monitor, or you can use SSH from any computer on the same network. 

Now that you have a running Nano, it’s time to jump into the computer vision software.

How to Install OpenDataCam

OpenDataCam is an open source tool for computer vision analytics that can track and count objects in almost any video feed. It is probably the easiest tool to set up and use that I have seen for this purpose. It is licensed under the permissive MIT license, which allows for use in commercial products.

OpenDataCam runs on the Docker platform, and requires access to CUDA, Nvidia’s tool for running parallel processing tasks on GPUs. First we need to make sure CUDA is defined in the PATH on the Nano, by editing the .bashrc file

nano ~/.bashrc

Add the following two lines to the end of the file, then save and close it.

export PATH=${PATH}:/usr/local/cuda/bin
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64

Increase the Jetson's swap space to 6 GB to improve performance and reliability

git clone https://github.com/JetsonHacksNano/installSwapfile
cd installSwapfile
chmod 777 installSwapfile.sh
./installSwapfile.sh

Install Docker-compose and dependencies

sudo apt install -y python3-pip libssl-dev libffi-dev python-openssl
sudo pip3 install docker-compose

Allow docker to run on startup

sudo systemctl enable docker

Install OpenDataCam. You will be asked for your sudo password during this process, and it may take a while.

mkdir ~/opendatacam
cd ~/opendatacam
wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v3.0.1/docker/install-opendatacam.sh
chmod 777 install-opendatacam.sh
./install-opendatacam.sh --platform nano

How to configure OpenDataCam

Once the installation is complete, open Chromium and go to localhost:8080 from the Jetson, or *JetsonIP*:8080 from any computer on the same network. When OpenDataCam has started, you will see the following video feed. It is a demo file included in OpenDataCam and demonstrates its object detection capabilities. We’ll use this as an example to get familiar with the interface before changing the video feed for our specific use case.

Click on the “Pathfinder” button in the upper left corner, and you will see the “tracks” generated by each car as it’s identified and tracked.

To count the vehicles, click the “Counter” button to add counting lines as shown below. These lines act as checkpoints, counting objects that pass over them. In the example below, I’ve added lines for oncoming, leaving, and crossing traffic. You can also toggle the direction of travel for objects to be counted, by clicking on the arrow in the centre of the line. To start counting, click the “Start recording” button.

To increase the reliability of the counters, it is important to place them in areas with high detection confidence. By clicking the hamburger menu in the upper right-hand corner, you can activate the tracker accuracy heatmap, which will highlight the areas with the lowest detection confidence levels.

This means that you should avoid these areas when placing counting lines. It might also be a good idea to move the camera to a different perspective, or even improve the detection model using transfer learning.

If you have a sample video file that you want to test, you can simply drag and drop it into the OpenDataCam window and it will start playing the new file.

To use a webcam or IP camera stream, you need to edit the config.json file in ~/opendatacam/ to specify the desired video source. Complete details are available on the OpenDataCam GitHub page, including all the other settings you can change. For now we’ll stick to the demo file while we link all the different parts of the project together, and I will describe the final setup I did for deployment at the end of the post.
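
As an illustration, switching from the demo file to a USB webcam involves changing the VIDEO_INPUT key and its matching entry in VIDEO_INPUTS_PARAMS. Only the relevant keys are shown below, and the GStreamer pipeline is a typical example that may need adjusting for your camera; check the OpenDataCam GitHub page for the exact options.

```json
{
  "VIDEO_INPUT": "usbcam",
  "VIDEO_INPUTS_PARAMS": {
    "file": "opendatacam_videos/demo.mp4",
    "usbcam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink"
  }
}
```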

To load an updated config file, restart the Docker container

cd ~/opendatacam
sudo docker-compose restart

How to Install Node-RED for Cloud Data Collection and Analysis

I want to collect traffic data in set intervals and store it for analysis. InfluxDB is a database solution built specifically for time series data, which is exactly what we get from OpenDataCam. It also has some built-in visualisation tools. You can install and run InfluxDB locally on the Jetson, but in a production environment we will likely have multiple sources of data, so sending it to the cloud for analysis makes more sense.

OpenDataCam provides a simple but effective API for interacting with it and extracting data, but we need to create a simple app to do this. I’ll be using Node-RED, a flow-based GUI wrapper for Node.js, which also allows us to see at a glance how data flows through the app and quickly make changes.

The above screenshot gives you a good idea of how the flow works. First it checks the status of OpenDataCam (ODC) and starts it if it is not yet running; if no recording is active, it starts one. If a recording is already running, the flow retrieves its data, stops it, and immediately starts a new recording. The data from the completed recording is then sent to InfluxDB.

This process is repeated at intervals set in the blue timestamp inject node. The dark green nodes provide output for debugging purposes.
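
Under the hood, the flow drives OpenDataCam through its HTTP API. The same sequence can be sketched with curl; the endpoint paths below are taken from the OpenDataCam API documentation, but verify them against your version, and run this on the Jetson against port 8080.

```shell
ODC="http://localhost:8080"

# Start a new recording; counting begins against the defined lines.
# "|| true" keeps the sketch from aborting if OpenDataCam isn't running.
curl -s "${ODC}/recording/start" || true

# Later: list the most recent recording, which includes its counter data
curl -s "${ODC}/recordings?offset=0&limit=1" || true

# Stop the active recording
curl -s "${ODC}/recording/stop" || true
```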

To install Node-RED, open a terminal on your Jetson, and run:

bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)

Install additional nodes for easy interaction with InfluxDB

cd ~/.node-red && npm install node-red-contrib-stackhero-influxdb-v2

We want to let Node-RED start automatically when the Jetson starts

sudo systemctl enable nodered.service

Start Node-RED as a background service

sudo systemctl start nodered.service
Open the Node-RED UI by going to localhost:1880 from the Jetson, or *JetsonIP*:1880 from any computer on the same network.

To import the flow, click the menu icon in the upper right corner, and select Import.

Copy the flow code from this GitHub repo, paste it into the import window, and click Import. The flow will now show in your Node-RED editor. Before we can deploy it, we need to set up InfluxDB to receive data, and get its authentication details.

How to configure InfluxDB Cloud

Now we need to set up an InfluxDB instance to receive the data. First go to the InfluxDB Cloud page, register for a free account, and create an instance on the cloud platform of your choice. I used AWS.

Once the instance is created and you are logged in, go to the Buckets tab on the Data page, click Create Bucket, and give your new bucket a name.

With the bucket created, we need to get the authentication details that Node-RED will use to write data to it. Go to the Data>Tokens page, click Generate, and select Read/Write Token. Select the bucket you created for both read and write, and give the token a name.

This token will be used to configure Node-RED.

We will also need the URL of our InfluxDB instance, which can be found under the Data>Client Libraries tab.

Now we can configure the InfluxDB node on Node-RED. Go to the Node-RED flow that you imported in the previous section, open the InfluxDB Write node, and click the pencil icon to configure the database server details.

Host: The URL you copied from InfluxDB

Token: The token that you created on InfluxDB

Organisation: The email address you used to create the InfluxDB instance

Default Bucket: The name of the InfluxDB bucket you created.
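
Before deploying the flow, you can sanity-check these credentials with a direct write to the InfluxDB v2 HTTP API. All values below are placeholders; substitute your own URL, organisation, token, and bucket name.

```shell
INFLUX_URL="https://eu-central-1-1.aws.cloud2.influxdata.com"  # region URL from the Client Libraries tab
INFLUX_ORG="you@example.com"                                   # your organisation
INFLUX_TOKEN="paste-your-read-write-token-here"
BUCKET="traffic"                                               # the bucket you created

# One data point in line protocol: measurement,tag field timestamp
POINT="traffic,areaName=oncoming count=42i $(date +%s)"

# "|| true" keeps the sketch from aborting while the placeholders are unchanged.
curl -s -X POST "${INFLUX_URL}/api/v2/write?org=${INFLUX_ORG}&bucket=${BUCKET}&precision=s" \
  -H "Authorization: Token ${INFLUX_TOKEN}" \
  --data-raw "${POINT}" || true
```

With real credentials, an empty 204 No Content response from InfluxDB means the write succeeded.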

Once you have configured the node, you can deploy the Node-RED flow by clicking Deploy in the upper right-hand corner. This will cause it to start sending data to InfluxDB, which you can confirm by checking for errors in the debug messages of Node-RED.

Go back to InfluxDB and open the Explore page. In the query builder at the bottom of the page, select your bucket in the From block, select View Raw Data, and click Submit. If the data has been written successfully, you should see a table with the counts we collected and other metadata. We’ll be using these metadata values to filter data and create graphs. If you want to learn more about how data is organized in InfluxDB, take a look at the documentation.

Now go to the Boards page of InfluxDB, click Create Dashboard, and select New Dashboard from the dropdown menu. This will create the dashboard, which you should now give a name at the top of the window. Click Add Cell just below it to create your first graph, for which we now have to create a query. In the query window at the bottom of the window, set up the query as shown below.

This will add the first line to the graph, showing the count for all the traffic that crossed the intersection in the video feed. Create a new query by clicking the plus icon above the query builder, and do exactly the same, except choose one of the other areaNames. Repeat for the remaining areaName.

This should give you a graph like this, with each line representing another counting area.

You can display and filter your data in a variety of ways by just selecting different options in the query builder.

Remote Access to Nvidia Jetson Nano Setup

Managing remote devices can be a real pain, and to get around this we’ll be using Remote.it, a service for quickly and easily setting up secure remote access to the Nano. Go to the Remote.it website and create a free account.

Now open a terminal window on your Jetson Nano, and install Remote.it

curl -LkO https://raw.githubusercontent.com/remoteit/installer/master/scripts/auto-install.sh
chmod +x ./auto-install.sh
sudo ./auto-install.sh

If you don't see any warnings, you can now configure Remote.it

sudo connectd_installer

This will guide you through registering the device on Remote.it, and then configuring the remote links to the applications we need.

  • SSH
  • OpenDataCam video feed, port 8090 (Custom TCP)
  • NodeRED, port 1880 (Custom TCP)

Once configuration is done, you can get the connection details for each app on a device from Remote.it, allowing us to connect to SSH, OpenDataCam, and Node-RED from anywhere. You can do this either from the Remote.it website, or via the desktop application.

Unfortunately the complete OpenDataCam config page (port 8080) does not work over the Remote.it link, so initial configuration must be done over a Wi-Fi or wired connection.

How to set up the 4G LTE Dongle

We’ll be using a Hologram SIM to provide cellular connectivity without needing to worry about carrier coverage, or Wi-Fi connections when deployed. Plug the 4G dongle with SIM into one of the USB ports on the Nano.

Restart the Jetson Nano, then run the following command to check that your modem is detected

sudo mmcli -L

I had to follow some additional steps to get the Jetson to detect the D-Link DWM-222 that I used, which are described on the project GitHub page. If you have trouble with another modem, you might also find the steps I took helpful.

nmcli c add type gsm ifname '*' con-name <name> apn <operator_apn>

Add the connection and APN details

<name>: Any name for the connection

<operator_apn>: Your operator's APN

My command ended up looking like this

nmcli c add type gsm ifname '*' con-name mobile apn hologram

Activate the connection

nmcli r wwan on

The connection is now ready to use, and will connect automatically in the future when the modem is plugged in. To test the connection, remove the Wi-Fi adaptor and check that you can still access Node-RED, OpenDataCam, and SSH using Remote.it.

Field Testing

For field testing, I deployed the Jetson Nano with a USB webcam at a local filling station to count passing traffic, as shown in the recorded video at the top of the post. For setup, I still like to use Wi-Fi, creating a hotspot on my laptop or on the Jetson, while the collected data is sent out via the LTE modem.

A small USB webcam with a long cable allows for easy deployment. You don’t need an HD video feed for good results, since a high-resolution feed reduces the frame rate at which OpenDataCam can operate. I used 800x600 resolution, and 640x480 will also work well. For more details on my specific camera settings, take a look at the project GitHub page.
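
To see which resolutions and frame rates your camera actually supports, `v4l2-ctl` from the `v4l-utils` package (`sudo apt install v4l-utils`) is handy; `/dev/video0` is the usual device node for the first USB camera, but yours may differ.

```shell
CAM=/dev/video0   # usual device node for the first USB camera; adjust if needed

# List supported pixel formats, resolutions and frame rates
# ("|| true" so the sketch doesn't abort when no camera is attached)
v4l2-ctl -d "$CAM" --list-formats-ext || true
```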

The screenshot below shows OpenDataCam detecting both vehicles and pedestrians. Factors that could reduce counting accuracy include partially obscured objects in dense traffic, suboptimal placement of counting lines, distant objects and poor lighting. Take the time to optimise camera setup, and you will be rewarded with accurate results.


Using OpenDataCam on the Jetson Nano is an extremely cost-effective solution for counting pedestrians and vehicles. Be sure to check the OpenDataCam documentation to see all the available features. It is perfect for retail analytics in shops, and for pedestrian and vehicle traffic monitoring to inform marketing decisions.
