- Aug 10, 2021
- 9 min
Getting Started With Helm Chart
Kubernetes is becoming the standard orchestration system for automated deployment, scaling, and management of containerized applications. When working with Kubernetes you are likely to deploy objects, such as deployments, volumes, services, and config maps many times during an application’s lifecycle. Helm is a tool to standardize the deployment of applications and their packaging while providing flexibility and configurability.
In my previous article “Building a Continuous Delivery Pipeline with GitHub and Helm to deploy Magnolia to Kubernetes” we explored deploying a containerized Magnolia application to a Kubernetes (K8S) cluster. In this article, we’ll look at Helm in more detail.
Helm describes itself as an “Application Package Manager for Kubernetes”, but it can do so much more than this description might suggest: Helm manages applications that run in a Kubernetes cluster and coordinates their download, installation, deployment, upgrade, and deletion.
In the world of Helm, Helm charts define applications as a collection of Kubernetes resources using YAML configuration files and templates. A chart consists not only of metadata that describes the application, it also manages the infrastructure needed to operate the application using Kubernetes primitives.
Once an instance of a chart is installed in the cluster, it’s called a ‘release’. One chart can be installed into the same cluster multiple times. Each time it is installed, a new release is created.
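For example, the same chart can be installed twice under two different release names, giving you two independent deployments (the chart path and release names below are just placeholders):
$ helm install my-app-staging ./my-chart
$ helm install my-app-prod ./my-chart
$ helm list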
Installing and configuring Helm 3
To install Helm v3.x, run the following commands:
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
To inspect what Helm can do, run helm --help.
To create a scaffold of a Helm chart including template files, run $ helm create my-first-chart.
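Depending on your Helm version, the generated scaffold looks roughly like this (my-first-chart is the name passed to the command above; the exact file list may vary):
my-first-chart/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml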
Let’s now examine a real chart, the Magnolia Helm chart.
Magnolia Helm chart
I will reuse the Magnolia Helm chart from my previous article. It is located under the helm-chart/ directory of the magnolia-docker repository in GitHub and has the below structure:
.
├── Chart.yaml
├── templates
│   ├── configmap.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── statefulset.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

2 directories, 8 files
Chart.yaml defines the chart and values.yaml specifies the values to be used during its deployment.
Chart.yaml
apiVersion: v2
name: magnolia
description: Deploy a Basic Magnolia CMS container

type: application
version: 0.1.0
appVersion: 6.2.3
The first part includes apiVersion, a mandatory parameter that specifies the chart API version, followed by the name of the chart and its description. The second part describes the chart type (application by default, or alternatively library), the chart’s version, which you should increment whenever you change the chart, and appVersion, the version of the application the chart deploys.
values.yaml
Template files fetch deployment information from values.yaml. To customize your Helm chart, you can either edit the existing file or create a new one.
replicaCount: 1

image:
  repository: ghcr.io/magnolia-sre/magnolia-docker/magnolia-docker
  pullPolicy: Always
  tag: "latest"

service:
  name: http
  port: 80
  targetPort: 8080

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  path: /
  authorHost: github-magnolia-docker-author.experimentation.magnolia-cloud.com
  publicHost: github-magnolia-docker-public.experimentation.magnolia-cloud.com
  tls:
    - secretName: github-magnolia-docker-author-tls
      hosts:
        - github-magnolia-docker-author.experimentation.magnolia-cloud.com
    - secretName: github-magnolia-docker-public-tls
      hosts:
        - github-magnolia-docker-public.experimentation.magnolia-cloud.com

resources:
  limits:
    memory: 1000Mi
  requests:
    cpu: 500m
    memory: 1000Mi

liveness:
  httpGet:
    path: /.rest/status
    port: http
  timeoutSeconds: 4
  periodSeconds: 5
  failureThreshold: 3
  initialDelaySeconds: 90

readiness:
  httpGet:
    path: /.rest/status
    port: http
  timeoutSeconds: 4
  periodSeconds: 6
  failureThreshold: 3
  initialDelaySeconds: 90

env:
  author:
    - name: JAVA_OPTS
      value: >-
        -Dmagnolia.bootstrap.authorInstance=true
        -Dmagnolia.update.auto=true
        -Dmagnolia.home=/opt/magnolia
  public:
    - name: JAVA_OPTS
      value: >-
        -Dmagnolia.bootstrap.authorInstance=false
        -Dmagnolia.update.auto=true
        -Dmagnolia.home=/opt/magnolia
The above file defines some important parameters for our deployments:
replicaCount: the number of replicas for the author and public pods; the default is 1
image: the container image repository and tag
service: the source and target port of the exposed service
ingress: hostnames, routing rules, and TLS termination for the application
resources:limits and resources:requests: resources and resource limits for the application
liveness and readiness: probes that K8S uses to determine if the application is ready to accept requests or needs to be restarted
Custom configurations: other application-specific configurations, for example, JVM options
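If you prefer not to edit the chart’s values.yaml directly, you can override individual values at install time. A minimal sketch, assuming a hypothetical my-values.yaml that contains only the keys you want to change:
# my-values.yaml (hypothetical override file)
replicaCount: 2
image:
  tag: "6.2.3"
Then pass it to Helm, or override single values with --set:
$ helm install test-mgnl-chart ./helm-chart/ -f my-values.yaml
$ helm install test-mgnl-chart ./helm-chart/ --set replicaCount=2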
Templates
The most important ingredient of a chart is the templates/ directory. It holds the application’s configuration files that will be deployed to the cluster. Magnolia’s templates/ directory contains configmap.yaml, ingress.yaml, service.yaml, and statefulset.yaml, as well as a tests directory with a connection test for the application.
Workload
A workload is an application running in a Kubernetes cluster. Each workload is made up of a set of pods, where each pod is a set of containers.
Put the other way around: one or more containers make up a pod, and one or more pods make up a workload.
Workloads can be exposed as separate services and can easily interact with each other via cluster-internal DNS. They also have separate data persistence layers.
StatefulSet
StatefulSet and Deployment are controller objects in Kubernetes. While a Deployment is intended for stateless applications whose instances are interchangeable, the pods of a StatefulSet are not interchangeable. A StatefulSet also provides guarantees about the ordering and uniqueness of its pods.
A pod in a StatefulSet has its own sticky identity. It is named using an index, for example, pod-0 and pod-1. Each pod can be addressed individually and keeps its name across restarts. It also has its own persistent volumes and database layer.
StatefulSet suits Magnolia’s intercommunication model as each instance needs its own data persistence layer.
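For illustration, assuming the release name test-mgnl-chart used later in this article, the author and public pods get stable, indexed names and keep them across restarts, along these lines:
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
test-mgnl-chart-magnolia-author-0   1/1     Running   0          5m
test-mgnl-chart-magnolia-public-0   1/1     Running   0          5m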
The below excerpt describes the typical structure of a StatefulSet including important parameters such as the replica count, container image and ports, and liveness and readiness probes:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "magnolia.fullname" . }}-public
  labels:
    {{- include "magnolia.labels" . | nindent 4 }}-public
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "magnolia.selectorLabels" . | nindent 6 }}-public
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "magnolia.selectorLabels" . | nindent 8 }}-public
    spec:
      containers:
        - name: {{ .Chart.Name }}-public
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- toYaml .Values.env.public | nindent 12 }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
          livenessProbe:
            {{- toYaml .Values.liveness | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readiness | nindent 12 }}
          startupProbe:
            {{- toYaml .Values.startupProbe | nindent 12 }}
service.yaml
A Service configures network access to a set of pods from within and from outside the cluster. Unlike ephemeral pods, a service has a stable name and a unique IP address, the clusterIP, which does not change unless the service is explicitly deleted.
Below you find an example of the service representing the Magnolia Public pod. It defines the pod’s selector using a set of pod labels, as well as the port and protocol used between the service and the underlying pods:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "magnolia.fullname" . }}-public
  labels:
    {{- include "magnolia.labels" . | nindent 4 }}-public
spec:
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: {{ .Values.service.name }}-public
  selector:
    {{- include "magnolia.selectorLabels" . | nindent 4 }}-public
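Inside the cluster, other workloads can reach this service via its cluster-internal DNS name (<service>.<namespace>.svc.cluster.local). A quick sketch, assuming the release name test-mgnl-chart used later in this article and the default namespace:
$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -s http://test-mgnl-chart-magnolia-public.default.svc.cluster.local/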
ingress.yaml
Ingress is an object that allows access to a Kubernetes service from outside the cluster. It defines and consolidates routing rules to manage external users' access to the service, typically via HTTPS/HTTP.
Note: In order to fulfill Ingress objects, you need an Ingress controller in your cluster, for example, NGINX Ingress Controller.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
{{ include "magnolia.labels" . | indent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
    - host: {{ .Values.ingress.authorHost }}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ $fullName }}-author
              servicePort: http-author
    - host: {{ .Values.ingress.publicHost }}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ $fullName }}-public
              servicePort: http-public
The above Ingress object includes some important attributes:
tls: TLS offloading, using certificates stored in Kubernetes secrets; each certificate can be associated with a list of hosts.
rules: traffic routing to the backend services; each rule can be defined for a specific hostname, a list of paths, and a backend as a combination of a service and port.
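The secrets named under tls must exist in the cluster, whether issued by a certificate tool (the kubernetes.io/tls-acme annotation in values.yaml hints at ACME-based issuance) or created by hand. A minimal manual sketch, with placeholder certificate and key files:
$ kubectl create secret tls github-magnolia-docker-author-tls \
    --cert=author.crt --key=author.key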
configmap.yaml
ConfigMap allows you to decouple an environment-specific configuration from pods and containers. It stores data as key-value pairs that can be consumed in other places. For example, a config map can be referenced as an environment variable, or used as a pod volume that is mounted to containers.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "magnolia.fullname" . }}
  labels:
    app: {{ template "magnolia.name" . }}
    chart: {{ template "magnolia.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  magnolia-cloud.decorations.publishing-core.config.yaml: |-
    receivers: !override
      public0:
        url: https://{{ .Values.ingress.publicHost }}
For example, this ConfigMap is referenced in statefulset.yaml as a pod volume:
containers:
  - name: {{ .Chart.Name }}-author
    …
    volumeMounts:
      - name: magnolia-home
        mountPath: /opt/magnolia
volumes:
  - name: mounted-config
    configMap:
      name: {{ template "magnolia.fullname" . }}
In the container, the config file magnolia-cloud.decorations.publishing-core.config.yaml is mounted under the /opt/magnolia/ directory.
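To verify the mount, you can read the file from a running author pod; the pod name below assumes the release test-mgnl-chart installed later in this article:
$ kubectl exec test-mgnl-chart-magnolia-author-0 -- \
    cat /opt/magnolia/magnolia-cloud.decorations.publishing-core.config.yaml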
Named Templates
The Magnolia templates leverage named templates using a syntax like {{- include "magnolia.labels" . }}. A named template is a Go template that is defined in a file and given a name. Once defined in _helpers.tpl, named templates can be used in other templates, avoiding boilerplate and repeated code.
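For illustration, a named template definition and its use could look like the following sketch; this is generic Helm syntax, not the exact content of the chart’s _helpers.tpl:
{{/* _helpers.tpl */}}
{{- define "magnolia.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
Any template in the chart can then pull these labels in with {{- include "magnolia.labels" . | nindent 4 }}, as seen in the StatefulSet, Service, and Ingress templates above.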
Chart Syntax
When developing a Helm chart I recommend you run it through the linter to ensure your templates are well-formed and follow best practices.
Run the helm lint command to see the linter in action:
$ helm lint ./helm-chart/
==> Linting ./helm-chart/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
To verify that all templates are defined as expected, you can render chart templates locally and display the output using the template command:
$ helm template ./helm-chart/
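To additionally validate the rendered manifests against the Kubernetes API schema, you can pipe them into a client-side dry run (assuming kubectl is configured for a cluster):
$ helm template ./helm-chart/ | kubectl apply --dry-run=client -f -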
Deploy a Helm chart to the cluster
Now, let’s get our hands dirty with a deployment.
Creating a Kubernetes cluster
We will use a minikube cluster for our test deployment. You can refer to https://minikube.sigs.k8s.io/docs/start/ for installation instructions.
Once you have installed minikube on your machine, you can start your cluster with a specific Kubernetes version:
$ minikube start --kubernetes-version=1.16.0
😄 minikube v1.6.2 on Darwin 10.14.5
✨ Automatically selected the 'virtualbox' driver (alternates: [])
🔥 Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.16.0 on Docker '19.03.5' ...
💾 Downloading kubeadm v1.16.0
💾 Downloading kubelet v1.16.0
🚜 Pulling images ...
🚀 Launching Kubernetes ...
⌛ Waiting for cluster to come online ...
🏄 Done! kubectl is now configured to use "minikube"
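Because the chart enables Ingress by default, you may also want an Ingress controller in the cluster, as noted in the ingress.yaml section. Minikube ships the NGINX Ingress Controller as an addon:
$ minikube addons enable ingress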
Installing the Magnolia chart
Install the Magnolia chart from your local Git repository and check the release:
$ helm install test-mgnl-chart ./helm-chart/
NAME: test-mgnl-chart
LAST DEPLOYED: Fri Jan 15 16:56:42 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1

$ helm list
NAME             NAMESPACE  REVISION  UPDATED                                STATUS    CHART           APP VERSION
test-mgnl-chart  default    1         2021-01-15 16:56:42.981924 +0700 +07   deployed  magnolia-0.1.0  6.2.3
Access the application
Use the port-forward command to forward a local port to the service port of the Magnolia author instance, for example:
$ kubectl port-forward svc/test-mgnl-chart-magnolia-author 8080:80
You can now access the Magnolia application at http://localhost:8080.
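The public instance can be reached the same way; assuming its service follows the same naming pattern, forward a second local port:
$ kubectl port-forward svc/test-mgnl-chart-magnolia-public 8081:80
The public instance is then available at http://localhost:8081.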
Next Steps
We’ve explored the basics of the Magnolia Helm chart and deployed the chart in a local cluster. There’s much more you can do from here, for example, making modifications to the chart templates, creating your own values.yaml file, configuring Ingress using an Ingress controller, and deploying the chart to your own cluster.