Azure Service Brokers in Kubernetes

Published Mar 16, 2018 · Last updated Sep 12, 2018

Foreword

Do you want to use a Kubernetes cluster, but you're not ready to host your data inside it just yet (maybe you're nervous about losing the data, which is very understandable)? In this article, I aim to convince you that you can enjoy using Kubernetes while relying on tried and tested products for data storage.

Warning

Service Brokers are still within Kubernetes' "incubator." This means the Kubernetes project has not yet deemed Service Brokers 100% ready for production. In practice, however, I've found Service Brokers running against Azure work really well, but your mileage may vary! Always exercise best practices by testing your work in a staging environment before promoting to production, and by learning how the system works in more detail than what I've summarized in this article.

Let common sense prevail!

Context

Kubernetes is one of those projects that seems to eat up the responsibilities of so many other tools while adding extra value. If you haven't already had the chance to build a Kubernetes cluster and play around, I'd encourage you to do so before trying out this article, as it assumes you have a basic understanding of Kubernetes, Helm, and Azure Cloud.

Using Open Service Broker, we're going to harness an amazing property of Kubernetes that is fundamental to its design and allows for rapid development of new features: the ability for third parties to define custom resource definitions.

Open Service Broker is a project that "allows developers, ISVs, and SaaS vendors a single, simple, and elegant way to deliver services to applications running within cloud native platforms, such as: Cloud Foundry, OpenShift, and Kubernetes. The project includes individuals from Fujitsu, Google, IBM, Pivotal, RedHat, and SAP."

In plain English, Open Service Broker takes advantage of custom resource definitions to give you a generic way to declare, inside Kubernetes, resources that should be spun up outside the cluster. Cloud providers can then supply adapters that map these Kubernetes resources onto cloud resources managed on your behalf.

Doing things this way allows you to have more of your infrastructure configuration defined by your Kubernetes configuration, which means less context switching between different types of configurations. In this article, we're going to use Azure as our cloud provider.

Setup

The first thing we need is an Azure account. You can sign up at azure.com. Once you have an account, download and install the Azure CLI tool, then log in with az login.

You'll also want to ensure that kubectl, helm, jq, and svcat are installed.
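
If you're on macOS with Homebrew, installing most of these might look like the following sketch (the package names are assumptions and vary by platform and over time; svcat is distributed separately by the Kubernetes service-catalog project, so grab the binary for your platform from its releases page):

# macOS / Homebrew example; adjust package names for your platform
brew install azure-cli kubernetes-cli kubernetes-helm jq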

Spinning Up a Cluster

The second thing we need is a Kubernetes cluster. We can set one up easily thanks to Azure Kubernetes Service (AKS), with the az command:

# Enable Azure AKS in the Azure CLI
az provider register -n Microsoft.ContainerService

# Create an Azure resource group (a location is required; pick one near you)
az group create --name k8s --location westeurope

# Build an AKS cluster with kubernetes version 1.8.6
az aks create \
  --resource-group k8s \
  --name k8s \
  --node-count 3 \
  --kubernetes-version 1.8.6 \
  --generate-ssh-keys

It will take a while for the Kubernetes cluster to come up, as Azure has to go away and build quite a few components that make up your AKS cluster. You can watch this happen by running watch az aks list, which should contain information about the status of the AKS cluster. You can also run az aks show -n k8s -g k8s | grep k8s | awk '{print $5}' to see the current provisioning state.
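
If you'd rather block until provisioning finishes, a small polling loop using the CLI's --query flag works too (a sketch; adjust the sleep interval to taste):

# Wait until the AKS cluster reports a provisioningState of Succeeded
until [ "$(az aks show -n k8s -g k8s --query provisioningState -o tsv)" = "Succeeded" ]; do
  sleep 30
done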

Once the cluster is up and running, it's time to install the credentials onto your computer so you can interact with Kubernetes via kubectl, helm, et al. This is again very easy thanks to Azure's CLI:

az aks get-credentials \
  --resource-group k8s \
  --name k8s

Next, you should wait for the Kubernetes nodes to be available. You can run watch kubectl get nodes to watch for the nodes as they come up.
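
If your kubectl version includes the wait subcommand, you can block until the nodes report Ready instead:

# Block until all nodes are Ready (requires a reasonably recent kubectl)
kubectl wait --for=condition=Ready nodes --all --timeout=600s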

Setting up Helm

Once we have a Kubernetes cluster with available nodes, we can make our first deploy: we'll deploy Helm onto the cluster!

helm init

Helm will install its server-side component, Tiller, onto your cluster. Tiller is used by Helm to manage releases of your Helm charts.
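
You can confirm Tiller is up before moving on; once its pod is Running, helm version should report both a client and a server version:

# Check the Tiller pod, then confirm Helm can talk to it
kubectl get pods --namespace kube-system -l app=helm,name=tiller
helm version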

Building a Service Principal for RBAC

RBAC, otherwise known as "Role-Based Access Control," is a way for you to ensure isolation between your resources by controlling who (or what) is allowed to act on them. Azure and Kubernetes both use RBAC but, in this context, we're talking about Azure's RBAC-powered Service Principals.

In Azure, a Service Principal is used to provide software access to Azure's resources (so we can spin resources up and down!).

# Build a RBAC-powered service principal
RBAC="$(az ad sp create-for-rbac -o json)"

# Get the Subscription ID for the Azure Account
AZURE_SUBSCRIPTION_ID="$(az account show --query id --out tsv)"

# Get the Tenant ID, Client ID, Client Secret and Service Principal Name from Azure.
AZURE_TENANT_ID="$(echo ${RBAC} | jq -r .tenant)"
AZURE_CLIENT_ID="$(echo ${RBAC} | jq -r .appId)"
AZURE_CLIENT_SECRET="$(echo ${RBAC} | jq -r .password)"
AZURE_SP_NAME="$(echo ${RBAC} | jq -r .name)"
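
If you want to sanity-check the service principal before handing it to the broker, you can log in with it (note this replaces your current CLI login, so you may prefer to do it in a separate shell):

# Assumption: verifying the credentials by logging in as the service principal
az login --service-principal \
  -u ${AZURE_CLIENT_ID} \
  -p ${AZURE_CLIENT_SECRET} \
  --tenant ${AZURE_TENANT_ID}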

Installing the Service Catalog

The service catalog provides an interface for cloud providers to work with. It's required so that we can then add a service broker (basically an adapter) that is capable of actually communicating with your Azure account and setting up the resources we define in our Kubernetes configuration.

Now that we have the service principal information and Helm installed, we can go ahead and deploy the Service Catalog to our cluster using Helm:

helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm install svc-cat/catalog \
  --name catalog \
  --namespace catalog \
  --set rbacEnable=false

This command adds the Service Catalog Helm repository, then installs the catalog using Helm. We turn off RBAC inside Kubernetes because AKS does not enable RBAC in its Kubernetes clusters by default (not to be confused with the Service Principal above, which is an Azure component!).

We need to wait for the Service Catalog to come up. We can do this with watch kubectl get pods --namespace=catalog.
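
Once the catalog pods are running, you can also confirm that the new API group has been registered with the cluster; you should see servicecatalog.k8s.io/v1beta1 in the output:

kubectl api-versions | grep servicecatalog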

Installing the Azure Open Service Broker

Now that we have the Service Catalog, we can add Azure's service broker, using the IDs and secrets we acquired earlier when we built the service principal:

helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
helm install azure/open-service-broker-azure \
  --name osba \
  --namespace osba \
  --set azure.subscriptionId=${AZURE_SUBSCRIPTION_ID} \
  --set azure.tenantId=${AZURE_TENANT_ID} \
  --set azure.clientId=${AZURE_CLIENT_ID} \
  --set azure.clientSecret=${AZURE_CLIENT_SECRET}
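
Depending on your OSBA chart version, some services (PostgreSQL included) may be gated behind a minimum stability setting. If the azure-postgresqldb class never shows up later, appending this extra flag to the helm install command above is worth trying (an assumption on my part; check the chart's documentation for your version):

  --set modules.minStability=EXPERIMENTAL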

While Helm is busy installing the broker, we can use svcat to inspect the broker status:

watch "svcat get broker osba | grep osba | awk '{ print $3 }'"

Installing a resource

If you've made it this far, it means you've managed to set up everything you need in order to start spinning up Azure resources from inside Kubernetes.

Let's give it a try! Since we've been using Helm, and you're likely to want it for your deployment configuration anyway, let's spin up the resources with a Helm chart.

In this tutorial, I'm going to be spinning up a Postgres server. In order to do this, we'll need two resources: a ServiceInstance and a ServiceBinding.

The instance defines the type of Postgres server to spin up, as well as handy extras such as which Postgres extensions to install at the same time. The binding is used to provide the connection information for other Kubernetes resources to consume.

First, let's initialize a new Helm project:

helm create myapp # Build a new helm project
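
helm create scaffolds a chart directory for us; the layout looks roughly like this (the exact starter files inside templates/ vary by Helm version):

myapp/
├── Chart.yaml
├── charts/
├── templates/
└── values.yaml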

We'll create a new file inside the newly created myapp directory called templates/service-broker/postgres-instance.yaml:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: myapp-postgres
  namespace: myapp                                    # The namespace the instance is exposed within
spec:
  clusterServiceClassExternalName: azure-postgresqldb # The type of resource to provision (use 'svcat get classes' to find the one you want)
  clusterServicePlanExternalName: basic50             # The size/scale of the resource    (use 'svcat get plans' to find the one you want)
  parameters:
    location: westeurope                              # Which datacenter to use
    resourceGroup: k8s                                # Which resource group to put the resource in
    extensions:                                       # Which postgres extensions to install before booting
      - uuid-ossp
      - postgis
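
As the comments above hint, svcat can list the classes and plans your broker exposes, and describe them in detail (the describe target below assumes the Azure Postgres class name used in this tutorial):

svcat get classes
svcat get plans
svcat describe class azure-postgresqldb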

And we'll create a matching file for the binding called templates/service-broker/postgres-binding.yaml:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: myapp-postgres-binding
  namespace: myapp            # The namespace the binding is exposed within
spec:
  instanceRef:
    name: myapp-postgres      # The name of the ServiceInstance to bind to
  secretName: myapp-postgres  # The name of the Secret which contains the connection info

Finally, run helm install ./myapp, and the Azure Open Service Broker will start building the resource for you. You can use svcat to get the status of the instance and binding:

watch 'svcat get bindings -n myapp && svcat get instances -n myapp'
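
Provisioning a managed Postgres server on Azure can take several minutes. For more detail on progress (or on errors), svcat can describe the individual resources:

svcat describe instance myapp-postgres -n myapp
svcat describe binding myapp-postgres-binding -n myapp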

Connecting to the resource

When both the instance and binding are ready, you will be able to connect to the instance as if it were running inside Kubernetes! Let's give that a go, shall we?

First, let's get the contents of the Secret:

kubectl get secret myapp-postgres -n myapp -o yaml

You will see something like this:

apiVersion: v1
data:
  database: eHF5MjdsM3pucQ==
  host: ZXhhbXBsZS5wb3N0Z3Jlcy5kYXRhYmFzZS5henVyZS5jb20=
  password: b0dFS3NSQTRTdzdXY2gw
  port: NTQzMg==
  username: YWJjZGVmZ2hpMUBhYjAxMjM0NS02NzhhLTFhMmItMTIzYS0xMjM0YWI1Y2Q2ZWY=
kind: Secret
metadata:
  creationTimestamp: 2018-03-15T20:21:42Z
  name: myapp-postgres
  namespace: myapp
  ownerReferences:
  - apiVersion: servicecatalog.k8s.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ServiceBinding
    name: myapp-postgres-binding
    uid: 4cca876b-0c43-11e8-89f1-0a580af40202
  resourceVersion: "933662"
  selfLink: /api/v1/namespaces/myapp/secrets/myapp-postgres
  uid: 82b381b8-0c44-11e8-9582-0a58ac1f18d9
type: Opaque

The value of each piece of data is base64 encoded. You can get the real username, password, and host by base64-decoding the values.
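
For example, to pull out and decode a single field (here the host, assuming the myapp namespace used earlier):

kubectl get secret myapp-postgres -n myapp -o jsonpath='{.data.host}' | base64 --decode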

Let's use a psql Docker image to test the connection:

kubectl delete pod psql --ignore-not-found # Delete the psql pod if it already exists
kubectl run -it psql --image=governmentpaas/psql --restart=Never -- psql -h <host> -p <port> -U <username> <database>

When you see the text, "If you don't see a command prompt, try pressing enter," paste in the password for the database. If everything worked out okay, you should now be connected. Once you're done, don't forget to delete the pod again with kubectl delete pod psql.
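
If you'd rather not paste the password interactively, psql reads it from the PGPASSWORD environment variable, which kubectl run can inject (a sketch; substitute your decoded values for the placeholders):

kubectl run -it psql --image=governmentpaas/psql --restart=Never \
  --env="PGPASSWORD=<password>" -- psql -h <host> -p <port> -U <username> <database>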

Conclusion

Kubernetes is already a very powerful tool that enables you to build scalable, maintainable infrastructure like never before. Open Service Brokers make it easy for you to build a Kubernetes configuration that describes the infrastructure that exists outside of the cluster. Here are some resources to help you dig deeper:

  1. The Service Catalog GitHub page has some very good documentation inside it
  2. Azure has a tutorial that shows how to use Open Service Broker to get a WordPress site up and running
  3. The Open Service Broker API website provides more high-level information about how it all works under the hood
  4. Lachlan Evenson has a great series of tutorials on Kubernetes, including Service Broker