
Jenkins as Code

Published Jan 27, 2019

I needed a new Jenkins server for a side project and didn't want to worry too much about it becoming a work of art. Often, the Jenkins servers I come across have been online for 300+ days, meaning they're not receiving important software updates and are potentially vulnerable. This happens because the server just keeps working, because nobody wants to lose configuration or build history, and because keeping Jenkins up to date can be a burden when new versions seem to come out every other week. You end up with a snowflake server that you're afraid to lose.

You can counteract that fear from the very beginning by making the server disposable. Jenkins build servers are often treated as pets when they should be treated as cattle. If it's painful, then do it more often, right?

I also wanted to explore a novel, lightweight way of chaining tools together when using Terraform. I did this for SSH and Ansible, but I don't see any reason the same approach wouldn't work with any other piece of software. I have no idea what to call it, but the idea is to use Terraform to generate commands that a user can run after Terraform has been successfully applied.

The project can be found on GitHub here: https://github.com/jonathanbaugh/terraform-ansible-jenkins

Tech Stack

The project is composed of a single Terraform module and two Ansible playbooks.

Terraform is in charge of creating the droplet, private keys, the Let's Encrypt certificate, and its validation. After Terraform runs, it provides the commands you need to run the Ansible playbooks. I haven't seen this approach to chaining Terraform and Ansible together before, so I think a little more explanation is in order.

TFX Commands

Terraform allows you to specify outputs, which can then be used to link other modules or tools together. In the past, I've used the JSON-formatted output, piped it to jq to extract the bits of information I need, and then piped those values to a custom script. It would look something like this:

terraform output -json servers \
  | jq -r '.server_ips|.value|join(" ")' \
  | verify_deployment.sh

That can be a bit verbose and it's certainly not easy to remember, especially if you have a tricky jq selector.

The new idea is simple: feed Terraform all the information it needs to feed Ansible, then compose the Bash commands in the Terraform output.

The commands are quite easy to compose and execute. Below is an example of how the pattern works; you will find working examples in output.tf (run via tfx ssh, tfx ansible, and tfx sync_jobs).

variable "server_name" {}

output "ssh" {
  value = "ssh ${var.server_name}"
}

After running terraform apply, we can run the SSH command in Bash using $(terraform output ssh).

The tfx command, short for Terraform execute, adds a little syntactic sugar to make this easier: the call becomes simply tfx ssh.
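The internals of tfx aren't shown in this post, but the core idea can be sketched as a tiny shell function. Everything below is a hypothetical illustration: the real tfx may differ, and the terraform function here is a stub standing in for the real CLI so the sketch runs on its own.

```shell
# Hypothetical sketch of tfx: look up a Terraform output by name and
# execute its value as a command.
tfx() {
  eval "$(terraform output "$1")"
}

# Stub standing in for the real Terraform CLI so the sketch runs without
# real state; in practice this would read from terraform.tfstate.
terraform() {
  echo "echo Connecting to jenkins.example.com"
}

tfx ssh   # runs the command stored in the "ssh" output
```

With the real CLI in place, tfx ansible and tfx sync_jobs would work the same way: look up the named output, then eval it.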

Now, this is a simplified example, but I hope you can see how it's extended in my project. Below is the Ansible playbook command for setting up the server. Watch all of those arguments and dynamic inputs collapse into tfx ansible!

output "ansible" {
  sensitive = true
  value = "ansible-playbook -u root --private-key ${local_file.private.filename} -i ${digitalocean_droplet._.ipv4_address}, --extra-vars domain_name=${digitalocean_record._.fqdn} --extra-vars jenkins_admin_password=${var.admin_password} --extra-vars host_key_checking=False ansible/jenkins.yaml"
}

This will output something like:

$ terraform output ansible
ansible-playbook -u root --private-key /path/to/the/generated/key -i 123.123.123.123, --extra-vars domain_name=jenkins.example.com --extra-vars jenkins_admin_password=1234 --extra-vars host_key_checking=False ansible/jenkins.yaml

As a bonus, the command is not echoed in the terminal when run via $(terraform output ansible)!

Developing Infrastructure as Code (IaC)

The whole project came together quickly thanks to the speed at which you can iterate with Terraform and Ansible. It's a wonderful way to build your infrastructure: the feedback loop is very tight compared to other tools, and this way of working more closely resembles working with software that needs to be compiled and executed.

Circular references

Circular references with Terraform can be an issue. Sometimes you just need multiple things to pop into existence at the same time in order for Terraform to fully manage all of the aspects of your project.

One example of this, which I came across when trying to get the Let's Encrypt certificate working: I would have liked to place the certificate on the server using Terraform file provisioners; however, the certificate depended on a domain name, and the domain name depended on the droplet.

Because of this circular reference, I was forced to place the certificate that Terraform generated on the file system and then place it on the server using Ansible.

Perhaps this is a cleaner way of placing the certificate anyway since there is a clear distinction between provisioning the infrastructure and provisioning the server itself.

Iterate quickly and destroy regularly

When developing your Terraform modules in this iterative way, it's important to completely destroy the environment from time to time to verify that it works from scratch. It's possible to get into a state where your infrastructure is difficult to reproduce in one pass because Terraform can have some invisible dependencies.

By burning things to the ground every now and then you build confidence in your code and your project will become more resilient.
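That rehearsal doesn't need to be fancy. Below is a hypothetical rebuild helper sketching the idea; the terraform function is a stub standing in for the real CLI so the sketch runs standalone.

```shell
# Hypothetical helper: destroy everything and re-apply in one pass to
# prove the module still works from scratch.
rebuild() {
  terraform destroy -auto-approve \
    && terraform apply -auto-approve
}

# Stub standing in for the real Terraform CLI so the sketch runs
# without touching real infrastructure.
terraform() {
  echo "terraform $1: ok"
}

rebuild
```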

Final thoughts and next steps

This is not a complete solution but it is more of a starting point. There are many things that still need to be addressed to turn this into a really useful project including:

  • User management - I typically like to use the GitHub OAuth Plugin so that user management can be handled via GitHub.
  • Credential management - This is an area I need to look into further but I'm imagining that the GitHub provider will let me automate quite a lot of this process.
  • Setting up a Jenkins slave pool, which would allow me to increase or decrease capacity as needed without having to get a larger master server.
  • Shifting more configuration to Terraform variables.
  • Making it more cloud-agnostic - this is one of Terraform's best features, and I could make use of it by letting you configure which provider you'd like to use (dozens are supported).

Further Reading

If you're new to Terraform, I highly recommend following their Introduction and Getting Started Guide. They provide a solid foundation for understanding how to manage all of your infrastructure with Terraform.

A post from Aukjan got me thinking about different ways to make these tools work together without a lot of fuss. I definitely recommend it: Ansible inventory generated from Terraform.

Discover and read more posts from Jonathan Baugh