
Build a Backend for IoT Projects and Set Up a CI/CD Pipeline with Docker

Published Mar 27, 2017

TL;DR

IoT devices are everywhere and available at a ridiculously low cost. For example, you can purchase the new Raspberry Pi Zero W, which has some really good specifications, for less than 10 dollars. As IoT projects continue to flourish, we will discuss the details of creating a simple backend that collects data sent by IoT devices. To build this backend, we will use Node.js, Docker, InfluxDB, and Grafana. We will deploy it on DigitalOcean through Docker Cloud.

Usage

We will illustrate our backend with temperature data from a simulator. This project can serve as a basis to build IoT applications that use real-world devices (e.g. Raspberry Pi, Orange Pi, C.H.I.P.) to send almost any kind of timestamped data. iOS or Android applications can also send data to our backend.

We can imagine applications such as:

  • cold chain monitoring (sending temperature)
  • environmental dashboard (sending temperature, humidity, …)
  • energy efficiency monitoring (sending real-time electricity consumption)
  • alarm (sending door status)
  • driving monitoring (sending real-time speed)
  • swimming pool water level
  • and more!

The project

This backend application is made up of three services:

  • an API in Node.js
  • an InfluxDB database
  • a Grafana dashboard for data visualization

The API receives data over an HTTP POST request and saves it in the underlying InfluxDB database. A dashboard built with Grafana enables us to visualize data from the database.

Note: real-world devices might use a lower-level protocol, such as raw TCP or UDP, to limit the size of the frames sent.

As the backend collects data, we will first create a simple data simulator in bash. In the next article, we will focus on building a device based on the Raspberry Pi (or another ARM board) that sends real-world data.

The steps we will cover:

All the steps we will go through in this article can be found on GitHub in the IoT demo project repository. A Node.js example implementation of the API is in the IoT API repository.

Those GitHub repositories are used as projects to illustrate the Docker In-Depth for Devs and Ops online course.

Note: the current backend version is not production ready, but can be used as a starting point.

Building a simple simulator

In this part, we will create a script that simulates the temperature sent by a specific device at a given timestamp. Below is a sample of the data that will be sent:

{
  "ts": "2017-03-01T23:12:52Z",
  "type": "temp",
  "value": 34,
  "sensor_id": 123
}

Let’s review the properties:

  • ts is the timestamp of when the temperature was measured, in ISO 8601 format
  • sensor_id identifies which device the temperature came from
  • type is hardcoded to “temp” to specify a temperature, but other values can be used
  • value is the measured temperature

Below is the script we will use for the simulator.

#!/bin/bash

# Default HOST:PORT targeted
HOST="localhost"
PORT=1337

function usage {
    echo "Usage: simulator.sh [-h HOST] [-p PORT]"
    exit 1
}

# Parse arguments
while getopts h:p: FLAG; do
  case $FLAG in
    h)
      HOST=$OPTARG
      ;;
    p)
      PORT=$OPTARG
      ;;
    \?)
      usage
      ;;
  esac
done

# Generate and send random data
while true; do
    # Current date
    d=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

    # Random temperature between 20 and 34°C
    temp=$(( ( RANDOM % 15 )  + 20 ))

    # Send data to API
    curl -XPOST -H "Content-Type: application/json" -d '{"ts":"'$d'", "type": "temp", "value": '$temp', "sensor_id": 123 }' http://$HOST:$PORT/data

    sleep 1
done

Basically, in each iteration of the loop, the current date is retrieved, a random temperature between 20 and 34°C is generated, and the data is sent to the host provided as a parameter. By default, data is sent to localhost on port 1337. If we needed to send data to the host www.example.com on port 3000, we would run the simulator like so:

$ ./simulator.sh -h www.example.com -p 3000

Creation of the API

The API, developed in Node.js, will have the following characteristics:

  • implements an HTTP POST endpoint on /data
  • listens on port 1337, unless the PORT environment variable is provided
  • replies with a 201 HTTP status code (Created)
  • displays the received data on the standard output
  • contains a test to check the endpoint’s implementation

At this stage, the API does not persist data, but we’ll connect it with an InfluxDB database soon.
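
The endpoint can already be exercised by hand: once the API is started locally (with npm start), sending a sample payload with curl should get a 201 reply. A quick sketch:

$ curl -i -XPOST -H "Content-Type: application/json" \
       -d '{"ts":"2017-03-01T23:12:52Z", "type": "temp", "value": 34, "sensor_id": 123}' \
       http://localhost:1337/data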

Basically, the API consists of 3 files:

  • package.json contains the application metadata and the list of dependencies
  • app.js contains the code defining the application web server
  • index.js is the entry point that loads app.js

index.js

// Load dependencies
const util    = require('util'),
      winston = require('winston'),
      app     = require('./app');

// Define API port
let port   = process.env.PORT || 1337;

// Run API
app.listen(port, function(){
    winston.info(util.format("server listening on port %s", port));
});

app.js

// Load dependencies
const express    = require('express'),
      bodyParser = require('body-parser'),
      winston    = require('winston');

// Create express application
let app = module.exports = express();

// Body parser configuration
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// Handle incoming data
app.post('/data',
         (req, res, next) => {
             winston.info(req.body);
             return res.sendStatus(201);
         });

For the Continuous Integration / Continuous Deployment part, we will define a simple test in the test/functional.js file. When run with npm test, it starts the API, sends a sample of data, and expects to get a 201 HTTP status code.
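
The actual test file is available in the IoT API repository. A minimal sketch of what it could look like — assuming, hypothetically, that supertest is used alongside mocha (which, as the output below shows, is the test runner used):

// test/functional.js — a sketch only; the real file in the repository may differ
const request = require('supertest'),
      app     = require('../app');

describe('Creation', function() {
    it('should create dummy data', function(done) {
        // Send a sample payload and expect a 201 (Created) reply
        request(app)
            .post('/data')
            .send({ ts: '2017-03-11T15:00:53Z', type: 'temp', value: 34, sensor_id: 123 })
            .expect(201, done);
    });
});

Running the test locally gives the following output: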

$ npm test

> iot@1.0.0 test /Users/luc/iot-api
> mocha test/functional.js


  Creation
info: server listening on port 3000
info:  ts=2017-03-11T15:00:53Z, type=temp, value=34, sensor_id=123
    ✓ should create dummy data (43ms)

Create a Docker image of the API

In order to package the API into a container, we will create a Dockerfile at the root of the application code. This Dockerfile contains all the instructions needed to install the application’s dependencies, from bottom to top: Linux libraries and binaries, Node.js runtime, npm modules, and application code.

FROM mhart/alpine-node:7.7.1

# Copy list of server side dependencies
COPY package.json /tmp/package.json

# Install dependencies
RUN cd /tmp && npm install

# Copy dependencies libraries
RUN mkdir /app && cp -a /tmp/node_modules /app/

# Copy src files
COPY . /app/

# Use /app working directory
WORKDIR /app

# Expose http port
EXPOSE 1337

# Run application
CMD ["npm", "start"]

To limit the size of the resulting image, we use mhart/alpine-node, a lightweight Alpine Linux image that embeds a Node.js runtime. This image is really popular — we can tell by the number of times it has been downloaded.
Also, to make sure a simple change in the code does not trigger a full reinstall of the dependencies, we run the npm install command first and copy the application source afterwards, so Docker can reuse the cached layers. The API exposes port 1337 and is run with the usual npm start command.

Let’s now create a Docker image of the API and tag it with 1.0.

$ docker image build -t iot-api:1.0 .
Sending build context to Docker daemon 9.216 kB
Step 1/9 : FROM mhart/alpine-node:7.7.1
 ---> e1a533c514f2
Step 2/9 : ENV LAST_UPDATED 20170318T100000
 ---> Running in 28946c53b094
 ---> 1ab1b4b2fd77
Removing intermediate container 28946c53b094
Step 3/9 : COPY package.json /tmp/package.json
 ---> ff681a2fd62b
Removing intermediate container 749abfe9aae9
Step 4/9 : RUN cd /tmp && npm install
 ---> Running in 70ab8ad9ecc0
...
 ---> 514366651621
Removing intermediate container 70ab8ad9ecc0
Step 5/9 : RUN mkdir /app && cp -a /tmp/node_modules /app/
 ---> Running in 9a7d7a541edf
 ---> 2b9377530971
Removing intermediate container 9a7d7a541edf
Step 6/9 : COPY . /app/
 ---> ec5cf6e0fc95
Removing intermediate container b3587338bfd1
Step 7/9 : WORKDIR /app
 ---> e74dc2e2921d
Removing intermediate container 629a4eaa046d
Step 8/9 : EXPOSE 1337
 ---> Running in dfbfef3335eb
 ---> 9888e647a2ee
Removing intermediate container dfbfef3335eb
Step 9/9 : CMD npm start
 ---> Running in cf7dd0eaf15c
 ---> ce3bbc3098e8
Removing intermediate container cf7dd0eaf15c
Successfully built ce3bbc3098e8

We now have an image of the API — it’s ready to be instantiated into a container with the following command:

$ docker container run -p 1337:1337 iot-api:1.0

In its current version, the API is not very useful, as it only prints the data it receives. In the next steps, we will modify the code so it persists the data into an underlying InfluxDB database.

Adding InfluxDB to the picture

InfluxDB is a great time-series database, and, as we will see, it's really easy to get started with. The InfluxDB documentation can help you understand the basic concepts very quickly.

Running InfluxDB inside a container

Docker Hub provides an official InfluxDB image. From the documentation, we can see that the simplest way to run a container based on InfluxDB is with the following command:

$ docker container run \
-p 8083:8083 -p 8086:8086 \
-e INFLUXDB_ADMIN_ENABLED=true \
influxdb

Port 8083 is used to access the administration interface, and port 8086 exposes the InfluxDB HTTP endpoints. Because these ports are published (with the -p option), they are accessible directly from localhost.
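
Once the container is up, we can quickly check that the HTTP API answers on port 8086 — InfluxDB replies to its /ping endpoint with a 204 status code:

$ curl -i http://localhost:8086/ping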

Also, the administration interface is not enabled by default — we need to provide the INFLUXDB_ADMIN_ENABLED environment variable for it to be accessible on port 8083.

local-InfluxDB-admin.png

From the administration interface, we can create the database we will use to persist the data.
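
The same database can also be created from the command line through InfluxDB's HTTP API — this is the command we will reuse later when setting up the tests:

$ curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE iot"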

At this stage, we can create a container for the API (using the iot-api:1.0 image) and another container for the underlying InfluxDB database. This is great, but the current API does not persist the data — let's change that.

Modify the API to persist data into InfluxDB

We need to modify the app.js file so it uses the node-influx npm package to connect to InfluxDB and write the data points.

Here’s the modified code:

// Load dependencies
const express    = require('express'),
      Influx     = require('influx'),
      bodyParser = require('body-parser'),
      winston    = require('winston');

// Create express application
let app = module.exports = express();

// Create a client towards InfluxDB
let influx = new Influx.InfluxDB({
   host: process.env.INFLUXDB_HOST || 'db',
   database: process.env.INFLUXDB_DATABASE || 'iot'
});

// Body parser configuration
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// Handle incoming data
app.post('/data',
         function(req, res, next){
             influx.writePoints([
                 {
                  measurement: 'data',
                  tags: { type: req.body.type },
                  fields: { sensor_id: req.body.sensor_id, value: req.body.value },
                  timestamp: new Date(req.body.ts).getTime() * 1000000
                 }
             ]).then(() => {
               winston.info(req.body);
               return res.sendStatus(201);
             })
             .catch( err => {
               winston.error(err.message);
               return res.sendStatus(500);
             });
         });

There are two important things to note here:

  1. We created an Influx.InfluxDB object and specified the InfluxDB host it will connect to and the database it will use. These parameters are provided as environment variables and default respectively to “db” and “iot”. We will understand why the default value of the host is “db” when we use Docker Compose.

  2. The method used to persist the data is writePoints. In InfluxDB terminology, a point is a timestamped set of fields and tags that belongs to a measurement. As a first approximation, a measurement can be seen as a table in a traditional SQL database. In our example, there is one tag (type) that indicates the nature of the data sent. The fields are sensor_id (the identifier of the device sending the data) and value. An example query against this measurement is shown below.
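
Once some points have been written, they can be queried back with InfluxQL through the same HTTP API. For example (assuming InfluxDB is reachable on localhost):

$ curl -G http://localhost:8086/query --data-urlencode "db=iot" \
       --data-urlencode "q=SELECT value, sensor_id FROM data WHERE type='temp' LIMIT 5"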

Let’s build a new version of the API image to account for those changes.

$ docker image build -t iot-api:2.0 .

We now know how to create containers to run InfluxDB and the latest version of the API. In order for them to work together in a clean way, we will use Docker Compose to define a multi-container application.

Build a Docker Compose application

Docker Compose is a tool, written in Python, that allows applications to run as a list of services. It uses a docker-compose.yml file to define the application. In our simple example, the docker-compose.yml file will look like the following:

version: "3.1"
services:
  db:
    image: influxdb
    environment:
      - INFLUXDB_ADMIN_ENABLED=true
    ports:
      - 8083:8083
      - 8086:8086
  api:
    image: iot-api:2.0
    ports:
      - 1337:1337

Two services are defined — both db and api publish the ports we defined above.

Since we’d like to have a neat interface to visualize the data, we will add another service to the picture before running the application. We will use the Grafana platform, which enables us to build really fancy dashboards.

grafana-sample-dashboard.png

Of course, in our example, the dashboard will not look like the screenshot above, but you will find that adding graphs is really easy.

We can use the Grafana image from Docker Hub to create a new service, named dashboard, and add it to the above docker-compose.yml file.

version: "3.1"
services:
  db:
    image: influxdb
    environment:
      - INFLUXDB_ADMIN_ENABLED=true
    ports:
      - 8083:8083
      - 8086:8086
  api:
    image: iot-api:2.0
    ports:
      - 1337:1337
  dashboard:
    image: grafana/grafana
    ports:
      - 3000:3000

Grafana exposes port 3000, which is mapped to the same port on the local machine. This means the Grafana interface will be available on http://localhost:3000 — we will see that soon.

Now, let’s run the whole application and see how the containers (which are just instances of each service) are able to communicate with each other. The Docker Compose application can be run with the following command.

$ docker-compose up
Creating network "iotapi_default" with the default driver
Creating iotapi_api_1
Creating iotapi_db_1
Creating iotapi_dashboard_1
Attaching to iotapi_db_1, iotapi_api_1, iotapi_dashboard_1
…

A lot of things happened here — a network was created, and a container was instantiated for each service and connected to this network. Since they are on the same network (a user-defined bridge in our example), the container of one service can communicate with the container of another service just by using the service name. Sounds confusing? Let's look at two examples:

  • A container instantiated from the api service can connect to the db service simply by referencing the “db” string as the InfluxDB host. This is the reason why we defaulted the host to “db” in the API code (a quick check of this name resolution is shown after this list).
// Create a client towards InfluxDB
let influx = new Influx.InfluxDB({
   host: process.env.INFLUXDB_HOST || 'db',
   database: process.env.INFLUXDB_DATABASE || 'iot'
});
  • The same applies to Grafana, as we will see when we define a data source based on InfluxDB.
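
To convince ourselves that this name resolution works, we can run a quick check from a container of the api service once the application is up — a simple sketch using docker-compose exec:

$ docker-compose exec api ping -c 1 db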

Create the database

Using the InfluxDB administration interface, accessible on localhost port 8083, we will create a database named iot.

local-InfluxDB-create-database.png

Run a simulator

Let’s start the simulator against localhost on port 1337 (the default values):

$ ./simulator.sh

Create a Grafana dashboard

The Grafana web interface is available on port 3000; the default credentials are admin / admin. The first thing we need to do is create a data source. As InfluxDB is one of the many data sources Grafana knows about, we just need to indicate a couple of parameters:

  • the URL of the InfluxDB host (note the “db” in the URL in order to target the db service)
  • the name of the database (iot).
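
For reference, the data source settings would look something like the following (assumed values — the exact fields depend on the Grafana version used):

Name:     iot-influxdb
Type:     InfluxDB
URL:      http://db:8086
Access:   proxy
Database: iot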

local-grafana-create-datasource.png

Once the data source has been added, we can create a dashboard. To make things simple, we will go with a graph and modify the default query so it matches the one on the screenshot below:

local-grafana-create-dashboard.png

When the dashboard is created, we can see the data from the simulator displayed.

local-grafana-data-samples.png

Distribute the image with Docker Hub

The image iot-api:2.0 is the latest version we created and the one we will need to distribute so it can run in other environments. To do so, we will use Docker Hub, the official Docker online registry. Basically, a registry is a place where images can be stored.

Once we have created an account on the Docker Hub, we will create a new repository, named iot-api. We will keep the default visibility as public so everybody will be able to use the images stored in this repository.

docker-hub-create-repo.png

Once the repository has been created, we are ready to push the iot-api:2.0 image to it. This only requires the image to be tagged with the format expected by Docker Hub: USERNAME/REPOSITORY:TAG. Let's do it using the docker image tag command.

$ docker image tag iot-api:2.0 lucj/iot-api:2.0

Once the image is tagged, we just need to log in from the command line

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (lucj):
Password:
Login Succeeded

And then push the image

$ docker push lucj/iot-api:2.0
The push refers to a repository [docker.io/lucj/iot-api]
2bf4fd3a3ace: Pushed
d462833d0039: Pushed
f1031b6334de: Pushed
2b458930d75d: Pushed
8e254b51dfd6: Layer already exists
60ab55d3379d: Layer already exists
2.0: digest: sha256:875971ec33256e551b743fb938d7933c52098b6b9348f5d35abc445c35dc9e06 size: 1577

If we check on the Docker Hub, we can see the image is now present.

docker-hub-check-version.png

We will now be able to deploy the whole application easily with Docker Compose on any machine.
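
A sketch of such a deployment: on any other Docker host, the api service in the docker-compose.yml file would reference the image we just pushed (lucj/iot-api:2.0 instead of the local iot-api:2.0), and the application could then be started with:

$ docker-compose pull
$ docker-compose up -d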

Set up a CI/CD pipeline with Docker Cloud

When we created the API, we specified a test at the same time. In this part, we will set up the environment to automate the testing and deployment process. The pipeline we will put in place is the following:

CICD-workflow.png

When changes are pushed to GitHub, tests are automatically triggered. If they run fine, a new version of the API image will be created and pushed to Docker Hub. It is then redeployed on the target environment through the Docker Cloud platform — we will use a DigitalOcean droplet in this example.

Set up the test for the Docker Compose application

Now that we have a Docker Compose application, we will refine the testing process a little bit and use a docker-compose.yml-like file to specify how the tests need to be run.

db:
  image: influxdb

sut:
  build: .
  command: /bin/sh -c 'curl -i -XPOST http://db:8086/query --data-urlencode "q=CREATE DATABASE iot" && npm test'
  links:
    - db

Two services are defined in this file:

  • db: the underlying InfluxDB
  • sut: service that builds the image and runs the database creation and the test command

Note: as curl is not present by default in the alpine image, we have added an instruction to the Dockerfile so that it is installed when we build the image. This is not a big deal since curl is really light, and it lets us create the database prior to running the tests.

...
# Install curl
RUN apk add -U curl

# Copy list of server side dependencies
COPY package.json /tmp/package.json
…

The tests can then be run with the following commands.

$ docker-compose -f compose-test.yml build sut
$ docker-compose -f compose-test.yml run sut
HTTP/1.1 200 OK
Connection: close
Content-Type: application/json
Request-Id: a661cdb3-0d59-11e7-8001-000000000000
X-Influxdb-Version: 1.2.1
Date: Mon, 20 Mar 2017 10:40:35 GMT
Transfer-Encoding: chunked

{"results":[{"statement_id":0}]}

> iot@1.0.0 test /app
> mocha test/functional.js

  Creation
info: server listening on port 3000
info:  ts=2017-03-11T15:00:53Z, type=temp, value=34, sensor_id=123
    ✓ should create dummy data (83ms)

  1 passing (99ms)

Basically, it runs the db service first, then builds and runs the sut service. When sut runs, the underlying database named iot is created (by issuing a curl request against InfluxDB's API), and the npm test command is run. The API is started and the test data is sent.

We have run the tests locally and they pass — that's great! With the pieces we have already put in place, we will be able to automate the testing process.

Manage the application with Docker Cloud

Docker Cloud is the CaaS (Container as a Service) platform hosted by Docker. It allows you to manage containerized applications very easily:

  • set up the underlying infrastructure on several cloud providers
  • integrate with GitHub and BitBucket
  • set up a whole CI/CD pipeline

Prerequisite

We first need to use the “Cloud Settings” menu to connect a source provider (our GitHub account in this case) and a cloud provider (DigitalOcean).

DockerCloud-cloud-settings.png

DigitalOcean is a great platform that allows you to spin up virtual machines easily and quickly. It’s obviously not free but a couple of dollars is enough for us to get started and really have some fun.

Creation of the underlying infrastructure

We will create a node that runs on DigitalOcean — this will be used to deploy the application.

CICD-DigitalOcean-create-droplet.png

We specified a couple of options, such as the region, the size of the VM, and a tag (iot) that will be used when we deploy the application later on.

Configuration of the repository

In the repository section, we need to select the iot-api repo we previously created in Docker Hub and configure it so that:

  • It runs tests on each push to GitHub (Autotest option)
  • It builds an image when the tests succeed (Autobuild option)
  • It redeploys the image (Autoredeploy option)

The Autotest and Autobuild options are defined at the repository level as shown in the following screenshot.

DockerCloud-repository-build-settings.png

By default, the test phase will run the service named sut that we defined in the compose-test.yml file.

As we will see next, the Autoredeploy option is specified at the service level.

Creation of the stack

Docker Cloud does not create an application directly from a docker-compose.yml file — it uses a slightly different file, which we named compose-cloud.yml. This file contains a limited set of options and is used to define our stack (group of services).

db:
  image: influxdb
  environment:
    - INFLUXDB_ADMIN_ENABLED=true
  ports:
    - 8086:8086
    - 8083:8083
  restart: on-failure
  tags:
    - iot
api:
  autoredeploy: true
  image: lucj/iot-api:latest
  command: npm start
  ports:
    - 1337:1337
  restart: on-failure
  tags:
    - iot
dashboard:
  image: grafana/grafana
  ports:
    - 3000:3000
  restart: on-failure
  tags:
    - iot

Notes:

  • We specified the iot tag so the application will be deployed on the node with this same tag. (Remember, we assigned the iot tag to the DigitalOcean machine we created.)
  • The autoredeploy flag ensures that each new image built after successful tests is redeployed on our node.

We then copy this file into the Docker Cloud wizard and use it to create a stack.

DockerCloud-stack-creation.png

When the stack is created, we can see the 3 services running and the available endpoints that enable the application to be reached from the outside.

DockerCloud-stack-running.png

Configuration of the application

As we did before, we will create the iot database from the InfluxDB interface.

CICD-InfluxDB-create-database.png

And then configure Grafana so it connects to InfluxDB, and create a simple dashboard.

CICD-grafana-datasource-create.png

Obviously, there are no data points yet, as no simulator or real device is sending data to our backend.

CICD-grafana-empty-graph.png

Note: in the 3 previous screenshots, we have used the URLs that the stack exposes as endpoints.

Test our pipeline

Let’s change something in the code of our application to see the different actions done in our pipeline.

  1. Changes pushed to GitHub

CICD-github-commit.png

  2. CI/CD pipeline triggered

CICD-docker-cloud-build-1.png

  3. Running the test (sut) and pushing the image to Docker Hub

CICD-docker-cloud-build-2.png

  4. New version on Docker Hub

Capture d’écran 2017-03-23 à 11.55.43.png

Note: we could also add a tag that matches GitHub’s commit hash. This is documented in the advanced options for autotest and autobuild features.

On Docker Cloud, the API service has automatically been redeployed with the new version of the image.

CICD-Docker-cloud-autoredeploy.png

Our application is deployed on a DigitalOcean node. Every change in the code is automatically tested and deployed if the tests are successful. That looks good.

  5. Running the simulator on our application

Using the endpoint provided by the API, let’s run the simulator.

$ ./simulator.sh -h api.iot.065267a7.svc.dockerapp.io -p 1337

After a couple of seconds, we can verify that the dashboard has received several data points.

CICD-grafana-sample-data.png

Conclusion

This lengthy article shows the steps you can take to start building an IoT project from scratch using some helpful technologies. This project is obviously not production ready, but hopefully it can be used as a starting point for a demo or personal project.

What’s next?

The next article, dealing with the device side of the project, will illustrate how a Raspberry Pi (or other ARM device) can easily collect real-world data and send them to our backend.


Are you building an IoT project? Did you find this post helpful? I'd really love to hear your feedback on all of this.
