Deploying NextCloud on Kubernetes with Kustomize
Deploying Nextcloud with Redis and MariaDB on Kubernetes is made easier with Kustomize. Find the full code on GitHub.
One of the simple pleasures of life is taking some old computers from around the basement and bringing them back to life by setting up one’s very own Kubernetes cluster. The humming of fans, unnecessary blinking server lights, and overly colorful graphs on a Grafana dashboard bring joy to one’s heart.
OK, I admit, it wasn’t that simple to set up Kubernetes knowing very little about it. But with the help of the internet and Rancher, my single-node cluster is up and running on an Ubuntu server, proxied by an nginx-wielding Raspberry Pi.
It was great watching Kubernetes at work. But once the novelty wore off, I found myself not knowing what to do with the cluster. So what’s next? How do I get more experience with Kubernetes? What other cool things can I do with it? Installing Nextcloud seemed like a good next step.
Enter Nextcloud, an open source cloud storage and general client-server solution that can be installed on a private server. Nextcloud is like having a personal, private Dropbox with storage limited only by the size of the host’s hard disks. More importantly, Nextcloud has a multi-component architecture, making it a great candidate for expanding one’s Kubernetes knowledge.
Nextcloud offers multiple installation options. The one I chose here comprises the following components:
- Nextcloud web app (PHP + Apache)
- MariaDB for metadata storage
- Cron for repetitive maintenance tasks
- Redis for distributed caching
Additionally, the web app requires persistent storage where it will save content, Cron requires access to that same storage for maintenance, and Nextcloud needs to communicate with MariaDB and Redis.
As a longtime software developer, I always have code structure and organization on my mind. Many coding projects start small, then tend to grow, sometimes exponentially. Without the right structure and tools, these projects become a nightmare to manage. To avoid this potential issue, I decided to go with Kustomize, an open source Kubernetes template customization solution.
Helm is another (more popular) templating option for Kubernetes. However, it requires some additional setup and has a steeper learning curve due to its wider feature set, hence my decision to start with Kustomize and defer learning Helm for now.
With Kustomize, manifests can be split by kind, then grouped into folders by component. With a root kustomization file, creating a deployable manifest becomes as simple as running one command:
kustomize build
In the examples below, most metadata and other extraneous details are omitted for brevity. These are available in the GitHub repo referenced at the end of the article.
Given dependencies among the components, I chose to start with the one with no dependencies: Redis. The deployment is rather simple: Use the redis:alpine Docker image for the container and expose port 6379.
redis/deployment.yaml
apiVersion: apps/v1
kind: Deployment ...
spec: ...
template: ...
spec:
containers:
- image: redis:alpine
name: redis
ports:
- containerPort: 6379
...
And expose it as a Service to be used by the Nextcloud web app.
Services enable container discovery without having to reference a container’s selector or IP address directly. Other containers will reference Redis through the Service rather than the container itself.
redis/service.yaml
apiVersion: v1
kind: Service ...
spec:
ports:
- port: 6379
...
We also need a kustomization file to instruct Kustomize to piece together the manifests. This one is a simple listing of the YAML files as Kustomize resources:
kustomization.yaml files are instructions for Kustomize. Having one in each folder allows us to reference the folder itself as a resource in the root kustomization file.
redis/kustomization.yaml
resources:
- deployment.yaml
- service.yaml
MariaDB is a bit more involved. The deployment starts with mariadb:latest with port 3306 exposed. Nextcloud requires a few command-line arguments to be passed to the database as part of the startup command. We also need a persistent storage volume (using a PersistentVolumeClaim) and database credentials (from a Kubernetes Secret). Here’s what that looks like:
mariadb/deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
spec:
containers:
- name: db
image: mariadb:latest
ports:
- containerPort: 3306
args:
- --transaction-isolation=READ-COMMITTED
- --binlog-format=ROW
- --max-connections=1000
env:
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
key: MYSQL_DATABASE
name: db-secrets
...
volumeMounts:
- mountPath: /var/lib/mysql
name: db-persistent-storage
restartPolicy: Always
volumes:
- name: db-persistent-storage
persistentVolumeClaim:
claimName: db-pvc
Kubernetes Secrets are one way to store credentials. Do not check the secret.yaml file into source control, since the values in it are only base64-encoded, not encrypted. A better solution would be to use Bitnami’s Sealed Secrets.
mariadb/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: db-secrets
type: Opaque
data:
MYSQL_DATABASE: bmV4dGNsb3Vk #nextcloud
MYSQL_USER: bmV4dGNsb3Vk #nextcloud
MYSQL_PASSWORD: <any-base64-encoded-password>
MYSQL_ROOT_PASSWORD: <a-different-base64-encoded-password>
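Secret values must be base64-encoded. You can generate (and verify) them from the command line; the value below is the literal string nextcloud used by the database and user keys:

```shell
# Encode a value for secret.yaml (-n avoids encoding a trailing newline)
echo -n 'nextcloud' | base64
# -> bmV4dGNsb3Vk

# Decode to double-check what a secret contains
echo 'bmV4dGNsb3Vk' | base64 --decode
# -> nextcloud
```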
By default, Kubernetes containers get ephemeral storage (that is, storage that goes away when the container is removed). To get persistent storage, a workload can make a PersistentVolumeClaim (or PVC) and let Kubernetes provision a volume through a storage handler (I use OpenEBS).
mariadb/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: db-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
And a Service to make it easily reachable from the web app:
mariadb/service.yaml
apiVersion: v1
kind: Service
...
spec:
ports:
- port: 3306
selector:
component: db
We tie it all together with a kustomization file:
mariadb/kustomization.yaml
resources:
- secret.yaml
- pvc.yaml
- deployment.yaml
- service.yaml
Finally, with the dependencies ready, we can tackle the web app and its cron sidekick. The deployment uses the nextcloud:apache image, references the db-secrets Secret created earlier, and requires a persistent volume (a large one, to fit the content).
Nextcloud uses the database credentials in the Secret created earlier to log in to MariaDB, create the nextcloud database, then access the database for normal operations.
nextcloud/deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
containers:
- image: nextcloud:apache
name: app
ports:
- containerPort: 80
env:
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
key: MYSQL_DATABASE
name: db-secrets
...
volumeMounts:
- mountPath: /var/www/html
name: app-persistent-storage
restartPolicy: Always
volumes:
- name: app-persistent-storage
persistentVolumeClaim:
claimName: app-pvc
nextcloud/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: app-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Gi
We also need a service to simplify discovery.
nextcloud/service.yaml
apiVersion: v1
kind: Service
...
spec:
ports:
- port: 80
selector:
component: app
Finally, since this is the entry point of our app, users need a URL to access it. For that, we create an Ingress that points to the Nextcloud app Service. This particular one uses cloud.example.com, so change it to your own domain before proceeding.
An Ingress is a Kubernetes object that manages external access to the cluster. While Pods within the cluster can reference each other through internal IPs and Services, they are generally not reachable from networks outside Kubernetes without an Ingress entry. I use the default nginx ingress controller.
nextcloud/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: default
spec:
rules:
- host: cloud.example.com
http:
paths:
- backend:
serviceName: app
servicePort: 80
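Note that the extensions/v1beta1 Ingress API shown above has since been deprecated and was removed in Kubernetes 1.22. On newer clusters, the equivalent manifest uses networking.k8s.io/v1, with a slightly different backend structure:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: default
spec:
  rules:
  - host: cloud.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
```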
We then tie it all together with a simple kustomization file:
nextcloud/kustomization.yaml
resources:
- pvc.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
The final component is cron. The container uses the same nextcloud:apache image and needs access to the web app’s persistent storage.
cron/deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
containers:
- image: nextcloud:apache
name: cron
command:
- /cron.sh
volumeMounts:
- name: app-persistent-storage
mountPath: /var/www/html
restartPolicy: Always
volumes:
- name: app-persistent-storage
persistentVolumeClaim:
claimName: app-pvc
And a kustomization file:
cron/kustomization.yaml
resources:
- deployment.yaml
Now that all the components are ready to go, how do we get Nextcloud to see MariaDB and Redis? It’s easy, kind of. Set the environment variables MYSQL_HOST and REDIS_HOST on the nextcloud container, and the Docker image will detect them and configure the app automatically. But how do we set these values? With vars, of course!
We use Kustomize “vars”, which let us bind a variable to an object reference for later use in the Kustomize build. In the kustomization.yaml at the root of our project, we set two vars pointing to the mariadb and redis Services:
The “bases” attribute references the folders containing a kustomization.yaml file.
kustomization.yaml
namespace: nextcloud
namePrefix: nextcloud-
commonLabels:
app: nextcloud
version: "15"
bases:
- redis
- mariadb
- nextcloud
- cron
patchesStrategicMerge:
- patch.yaml
vars:
- name: DB_SERVICE
objref:
apiVersion: v1
kind: Service
name: db
- name: REDIS_SERVICE
objref:
apiVersion: v1
kind: Service
name: redis
We then use a Kustomize patch to set environment variables in the nextcloud app container to the values captured by the vars:
patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
template:
spec:
containers:
- name: app
env:
- name: MYSQL_HOST
value: $(DB_SERVICE)
- name: REDIS_HOST
value: $(REDIS_SERVICE)
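After `kustomize build` runs, each var resolves to the final name of its referenced object, including the nextcloud- prefix added by the root kustomization. The rendered container env should come out roughly like this (illustrative output, not a file in the repo):

```yaml
env:
- name: MYSQL_HOST
  value: nextcloud-db
- name: REDIS_HOST
  value: nextcloud-redis
```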
And that’s it! We’re ready to deploy.
Thanks to Kustomize, deploying the 15+ Kubernetes objects is a breeze. At the root of the project folder, simply run:
kustomize build | kubectl apply -f -
Log into Rancher or use kubectl to check when the app is ready, then navigate to the ingress URL to load Nextcloud.
To be accessible from the internet, Nextcloud requires an HTTPS connection. This setup assumes that TLS termination takes place outside of Rancher and Kubernetes. In my setup, a separate computer (a Raspberry Pi) running nginx and Let’s Encrypt handles TLS, then forwards plain HTTP traffic to the Rancher backend.
Want to try out the deployment on your own cluster? The full code is available on GitHub.