
My Open Mainframe Project Internship Experience

Published Aug 16, 2019

The Open Mainframe Project Internship program is a remote position in which students contribute to the Open Mainframe Project.

The Open Mainframe Project is intended to serve as a focal point for deployment and use of Linux and Open Source in a mainframe computing environment. The Project intends to increase collaboration across the mainframe community and to develop shared tool sets and resources. Furthermore, the Project seeks to involve the participation of academic institutions to assist in teaching and educating the mainframe engineers and developers of tomorrow.

The aim of my project was to build Docker images of development stacks such as MEAN for s390x-based Linux distributions, and to dockerize various modern frameworks and languages. The second part of the project was to automate the build of all of these Docker images, as well as the entire ClefOS library (one of the official Docker Hub image libraries), using Jenkins CI.

I had been trying to get into this program since the very beginning of the year, and I was extremely exhilarated when I finally got selected.

These past few weeks have been full of excitement and learning for me. The goal of my project was to build Docker images for the s390x architecture on SUSE Linux Enterprise Server 15 (SLES 15) and to automate the scheduled build process for the ClefOS images. The first half of the project was to build the images. We initially planned to build them for openSUSE, but openSUSE had not been kept current with s390x, which forced us to switch to SLES 15 instead.

My first challenge was to build a base image of SLES 15. This base image would then be used by all of the other images. It was a challenge for me, as I had never built a base image before. To accomplish this, my mentor Neale provided me with an SLES 15 Linux guest instance. I accessed the VM and wrote a bash script that creates a chroot environment, adds repositories to it, installs the essential packages, and finally packs everything into a tarball. This tarball is then consumed by our Dockerfile, which builds the base image starting FROM scratch.
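
The script itself is longer, but a minimal sketch of the approach, assuming zypper is available on the SLES 15 guest and using placeholder repository and package names (not the exact ones from the project), looks roughly like this:

#!/bin/bash
# Rough sketch of the base-image build; repository URL, package list,
# and file names are placeholders, not the exact ones used in the project.
set -e

REPO_URL="$1"               # repository URL passed in by the caller
ROOTFS=/tmp/sles15-rootfs
mkdir -p "$ROOTFS"

# Register the repository inside the chroot directory and refresh it.
zypper --root "$ROOTFS" addrepo "$REPO_URL" base
zypper --root "$ROOTFS" --gpg-auto-import-keys refresh

# Install a minimal package set into the chroot.
zypper --root "$ROOTFS" install -y bash coreutils zypper

# Pack the chroot into a tarball; the Dockerfile then imports it with
#   FROM scratch
#   ADD sles15-base.tar.xz /
tar -C "$ROOTFS" -cJf sles15-base.tar.xz .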

After achieving this goal, I moved on to building more images for SLES 15. Then came the MEAN stack. The stack of MongoDB, Express, Angular, and Node.js is very popular, and we wanted to provide it as a Docker image for s390x-based SLES 15. But we couldn't find any official MongoDB package for SLES 15. I tried using the MongoDB build for SLES 12, but it didn't work: it was unable to find the libcrypto library even though it was installed. We finally decided to wait for an official release of MongoDB for SLES 15 before building images for MongoDB and the MEAN stack.

The next phase of the project was building a system to automate the image builds so that the images stay up to date. I achieved this by building a pipeline in Jenkins. The pipeline runs automatically once every month and also whenever a commit is pushed to the codebase. The pipeline executes the appropriate commands to build all of the Docker images. This lets us see whether a change breaks an image, because the pipeline fails, thus adding CI/CD support to the repository. I am building a similar pipeline for all of the images in the ClefOS repository as well, which will keep the ClefOS images up to date.

Building the pipeline was a challenge in itself. A Jenkinsfile can be used to run bash commands, and if the Jenkins server has access to the Docker daemon, building images is quite straightforward. The problem is that you cannot push those images: for that you would have to run docker login and supply the credentials in the script, which is quite insecure. This is where the Docker Pipeline plugin comes to the rescue. In a scripted Jenkinsfile it lets you build a Docker image by simply passing the image name and the build context (the directory containing the Dockerfile):

app = docker.build("repo/name", "./relative/path/to/build-context")

The resulting app object can then be used to push the image to Docker Hub by pulling the credentials from the Jenkins Credentials plugin, so no credentials appear in the Jenkinsfile itself. The plugin has one major drawback, though: building over a hundred images this way requires a lot of code in the Jenkinsfile. Even so, it is the only working solution I have found that both builds the Docker images and lets us push them from Jenkins itself.
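
A minimal scripted-pipeline sketch of this build-and-push flow is shown below; the credentials ID dockerhub-creds and the ./base path are placeholders I am assuming for illustration, not values taken from the project:

// Scripted Jenkinsfile sketch: build an image with the Docker Pipeline plugin
// and push it with credentials stored in Jenkins (never written in the script).
node {
    checkout scm

    // Build from the Dockerfile in ./base (illustrative build context).
    def app = docker.build("repo/name", "./base")

    // 'dockerhub-creds' is a hypothetical Jenkins credentials ID for Docker Hub.
    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
        app.push('latest')
    }
}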

I must say this has been a very challenging as well as rewarding project. Each day I learn something new, and I master it once I have implemented it a few times. By working on s390x-based VMs, I have learnt how an s390x system works. I am looking forward to continuing this work and finishing the project successfully.

[Image: docker-jenkins.png]

In the second phase of my internship at the Open Mainframe Project, I automated the builds of the ClefOS library of images as well as my SLES 15 Docker images for s390x. To achieve this, I used Jenkins CI and a little bit of bash scripting.

Setting up the pipeline was the major challenge of my internship. I used Jenkins CI and linked it to the GitHub repository containing all of the Dockerfile source code using a webhook. This webhook sends a POST request to the specified Jenkins server whenever a new commit is pushed. The Jenkins server is hosted on an s390x ClefOS virtual machine on the LinuxONE Community Cloud and is always ready to receive this POST request. The payload of the request contains details such as which repository and which commit triggered the webhook. After parsing the request, Jenkins triggers my pipelines: one pipeline is for ClefOS and the second is for my s390x SLES 15 images. The ClefOS pipeline runs on the master node and the SLES 15 pipeline runs on a slave node. The pipelines are triggered whenever new commits are pushed to the master branch of the repo. A single repository contains the source code of both the ClefOS and SLES 15 images: https://github.com/openmainframeproject-internship/DockerHub-Development-Stacks
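
For completeness, a scripted pipeline can subscribe to these webhook-driven builds through a trigger property. The snippet below is only a sketch assuming the GitHub plugin is installed; it is not taken from the project's actual Jenkinsfile:

// Scripted Jenkinsfile sketch: let the GitHub plugin trigger this pipeline
// whenever a push webhook arrives from the linked repository.
properties([
    pipelineTriggers([githubPush()])
])

node {
    checkout scm
    // ...build stages go here...
}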

The pipelines pull the Jenkinsfile from the source code itself. There were so many ClefOS images that instructing Jenkins to build each one individually in the Jenkinsfile was getting cumbersome. Instead, each image has its own Makefile, which contains the commands to build the image from its source folder, push it, and clean up the system. The initial problem was that, to push an image, we had to build it with docker.build() and assign the result to a variable so that we could call its push() method. However, the Docker plugin also provides docker.image(), which lets us refer to images that were built earlier. I used these two facts to my advantage and wrote a bash script that iterates over all of the folders and runs make all in each of them; this builds all of the images in the ClefOS folder. After that we simply use docker.image() to assign each image to a variable on which we can call the push() method. To remove the images, I use a similar bash script that runs the make clean target of the Makefiles.
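
As a rough illustration (the directory name clefos/ and the Makefile targets are assumptions based on the description above, not the exact project layout), the build script boils down to a loop like this:

#!/bin/bash
# Sketch of the helper script: walk every image folder and run its Makefile.
# Assumes each subdirectory of clefos/ contains a Makefile with an 'all' target.
set -e

for dir in clefos/*/; do
    if [ -f "${dir}Makefile" ]; then
        echo "Building images in ${dir}"
        make -C "${dir}" all
    fi
done

On the push side, the Jenkinsfile then looks up each freshly built image by name, for example docker.image("repo/name").push(), inside the same docker.withRegistry() block shown earlier.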

This solution has some advantages and some flaws. The major advantage is that all of the images get built with a single command in the Jenkinsfile, but it removes the flexibility of building individual images. To address this, I have also written the code to build individual images using docker.build(); it is present in the Jenkinsfile as comments. If someone wants to check the build of a single image in CI, they can comment out the call to the build-all bash script and uncomment the code for building that image.

There were also some images that hardcoded versions in their source code. I changed them to use the latest available version by scraping a .yml file with a bash script.
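
The post does not include that script, so the following is only a sketch of the idea, with versions.yml and the nodejs key as hypothetical examples:

#!/bin/bash
# Sketch: read the latest version string for a component out of a YAML file
# and pass it to the Docker build. File name, key, and build arg are illustrative.
VERSION=$(grep '^nodejs:' versions.yml | awk '{print $2}')

docker build --build-arg VERSION="$VERSION" -t repo/name .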

The pipelines are also scheduled to run on the 22nd of every month. This ensures that the images stay on the latest versions, so developers can use them right away and don't have to deal with outdated images.
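
In a scripted Jenkinsfile, that monthly schedule can be expressed with a cron trigger; the time of day below is arbitrary:

// Scripted Jenkinsfile sketch: run the pipeline at midnight on the 22nd of
// every month, in addition to any webhook-triggered builds.
properties([
    pipelineTriggers([cron('0 0 22 * *')])
])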
