Hugo, Docker, Kubernetes and Gitlab CI - How this blog is made

… or how I ditched PHP and WordPress and started enjoying blogging again.

Besides being an IT pro, I am also a geek. I love trying out new stuff, learning something new every day, and exploring new technologies.
Tired of always having to watch my back while running a blog on WordPress, getting hacked countless times because of its security flaws, or simply having no time to install updates, I bring you the super-duper-automatic deployment of a Hugo-based site on Gitlab CE (with CI), Docker and Kubernetes. Ignore what I wrote about Kubernetes in a previous article: after spending some weeks learning it and a couple more months improving my skills, I am actually impressed by its performance, its flexibility and how easy it is to use once you've learned the basics.

So here comes my all-new recipe for super-duper-automated static websites powered by Hugo, Gitlab (with CI), Docker and Kubernetes.

List of ingredients:

 - One text editor of choice (more convenient with a Hugo plugin, I use VS Code).
 - A blank Gitlab repository with Gitlab CI enabled.
 - A Gitlab Docker runner.
 - A Kubernetes cluster on which to deploy your site.
 - The ~/.kube/config file allowing you to run kubectl commands on said cluster.


Hugo

Hugo is a static website generator that is very easy to pick up. It has lots of freely available themes, its pages are written in Markdown, and it generates your site lightning fast. Explaining it in detail is out of the scope of this article, so take a look at their website.
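If you just want to get a feeling for it, the basic local workflow looks roughly like this (a sketch; "myblog" and the post name are only placeholders):

hugo new site myblog
cd myblog
# put a theme into ./themes and reference it in config.toml (theme = "<theme name>")
hugo new posts/my-first-post.md
hugo server -D    # live preview, including drafts, at http://localhost:1313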

Gitlab (with CI)

I run my own instance of Gitlab, as I have plenty of space on my server. I use my own instance because I don't want my ~/.kube/config on someone else's system.

Special considerations for gitlab-runner

You will need a Gitlab Runner (that is the agent that runs your CI jobs; it can live on the same machine Gitlab is running on, on another machine, or even on your laptop) of type Docker. The machine where the agent runs will need to have Docker installed and running. After creating a default configuration for the runner using the gitlab-runner register command, you will need to edit some things in /etc/gitlab-runner/config.toml as follows:

concurrent = <how many CI jobs you want to run at the same time>
check_interval = 0

[[runners]]
  name = "<your runner name here>"
  url = "<your gitlab URL>"
  token = "<your gitlab runner token>"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = true
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock", "/etc/default/docker:/etc/default/docker", "/etc/docker/daemon.json:/etc/docker/daemon.json"]
    shm_size = 0

After that, restart gitlab-runner: sudo systemctl restart gitlab-runner

Also, ensure that the user running the gitlab-runner process is a member of the group docker, or your CI jobs will not start.

Just to be sure gitlab-runner picks up the proper permissions and groups, either restart it again, or restart the machine where it’s running.
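For reference, the whole runner setup boils down to something like this (a sketch; the registration token can be found in your project's CI / CD settings or in the admin area, and the service user is usually called gitlab-runner):

sudo gitlab-runner register \
  --url https://<your gitlab URL> \
  --registration-token <your registration token> \
  --executor docker \
  --description "<your runner name here>"
sudo usermod -aG docker gitlab-runner
sudo systemctl restart gitlab-runner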

Start a Hugo website

Next step is setting up a Hugo site. For that purpose:

  • Install Hugo locally on your workstation. You will only need that for the initial setup of the structure of the website or blog and some initial tweaking of the theme you have chosen.
  • When installing a theme, don’t clone the git repo into your Hugo folder structure, just download it as a .zip file, extract it and put the content in the ./themes folder.

When you are happy with how your website looks and with the initial content, go ahead and push it all to git. Gitlab will provide all the relevant information on the main project page of an empty repo.
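The push itself is just the usual Git routine, roughly like this (the remote URL is whatever your new, empty Gitlab project shows on its front page):

cd myblog
git init
git add .
git commit -m "Initial Hugo site"
git remote add origin git@<your gitlab host>:<group>/<repository>.git
git push -u origin master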

Set up .gitlab-ci.yml

We need to provide the Gitlab continuous integration with instructions on what to do when new code is committed to our repo. These instructions will be in a file called .gitlab-ci.yml in the root of our repository.

Build the Hugo /public directory

The first thing in our setup will be to run hugo to create the static content of our website. Here are the relevant lines in our .gitlab-ci.yml:

stages:
  - build

build:
  stage: build
  image: dinofizz/hugo
  script:
    - rm -rf public
    - hugo -d public -b "<your website URL>"
  cache:
    paths:
      - public
  artifacts:
    paths:
      - public
    expire_in: 2 weeks

Let’s analyze what this file does:

  • We have a definition of the stages of our CI/CD process, for starters there is only one stage: build
  • Then we declare our build stage

  • image: dinofizz/hugo: This specifies the image (from Docker Hub) that will be used in the subsequent steps. In this case, the image is based on the very lightweight Alpine Linux and also contains the hugo binary. More information about this image can be found here.

  • script: defines the steps that will be run in this image. In this case the commands are:

    • - rm -rf public: will clean out the /public directory (which we should NOT have committed to our git repo in the previous step), just to make sure hugo finds a clean build environment.

    • hugo -d public -b "<your website URL>" will run hugo. -d public indicates the target directory where the static site will be built, and -b "<your website URL>" tells Hugo the base URL of your website, which it uses to build some of its links.

  • cache: tells CI which paths it should keep in place for subsequent CI steps. In this case it’s only the public directory, which is where our static content for our website will be living.

  • artifacts: also tells the CI process which parts of our working directory should be uploaded to Gitlab as an artifact. I also provide a maximum age with expire_in: 2 weeks so that everything is cleaned up after this time. This will prevent our repository from becoming cluttered with old builds.

Now we can run the pipeline for the first time in our Gitlab project. For this, select CI / CD -> Pipelines from the Gitlab side menu:


and then click the “run pipeline” button

Run Pipeline

and confirm that you want to run the pipeline in the following dialog. Running it against the “master” branch is OK for now.

Confirm Pipeline

After this, Gitlab CI will start its magic… You can follow its tracks by going to the currently running pipeline, which shows up in the CI / CD menu under Pipelines, and then clicking on the “build” step in the current pipeline. It will show something similar to this:

Hugo Build

What exactly has happened here? Let's analyze the output step by step:

Running with gitlab-runner 10.4.0 (857480b6)
  on conxtor-docker (041c64fd)
Using Docker executor with image dinofizz/hugo ...
Using docker image sha256:c74ed0802c8988ef5c718a49abbfaec747e364c980ea91aa6ac793c4033eb4fc for predefined container...
Pulling docker image dinofizz/hugo ...
Using docker image dinofizz/hugo ID=sha256:03a2ae6b65e530d0a4c59cbb82057464bed245d60653ac83d37007dc6f73c0c4 for build container...
Running on runner-041c64fd-project-4-concurrent-0 via gitlab-runner...

Here, the gitlab runner has downloaded and started a docker image called dinofizz/hugo. This is the image we specified above, and it contains the hugo executable. It's a quite lightweight image, based on Alpine Linux (a very, very stripped-down version of Linux) with the hugo binary added on top. The image comes from Docker Hub, which is a vast repository of docker images for almost all purposes. You will probably find the image you need there, including one for just about any other static website generator around nowadays.

Fetching changes...
Removing public/
HEAD is now at 78947dd Darn typos
   78947dd..7e5a0ad  master     -> origin/master
Checking out 7e5a0ad6 as master...
Skipping Git submodules setup
Checking cache for default...

Gitlab CI / CD has just cloned / pulled the contents of our repo into the current working directory of the docker image. This is normally /cache (it's defined above in the gitlab-runner config.toml as a volume).

$ rm -rf public
$ hugo -d public -b "<Your Website URI>"
Started building sites ...
Built site for language en:
0 draft content
0 future content
0 expired content
13 regular pages created
62 other pages created
11 non-page files copied
29 paginator pages created
16 tags created
10 categories created
total in 523 ms
Creating cache default...
public: found 228 matching files

This is the actual run of hugo to generate your static content in the ./public directory. If you remember, in our .gitlab-ci.yml we have a section:

  artifacts:
    paths:
      - public

which is relevant for the next step, creating the actual artifact:

Created cache
Uploading artifacts...
public: found 228 matching files                   
Uploading artifacts to coordinator... ok            id=231 responseStatus=201 Created token=3FSTDd6B

This creates what is called an artifact, which is nothing more or less than a .zip file containing the /public directory. At the end, we see

Job succeeded

just to reassure us that our static content has been built successfully and is now available for the next steps.

Build a Docker image to serve our static website:

The next step is to create a Docker image containing a web server and our generated static website. I have long favoured Nginx as my web server of choice, and there is an excellent and lightweight official image right on Docker Hub.

We can follow the usage instructions provided there almost verbatim. Our Dockerfile will look as follows:

FROM nginx
COPY public /usr/share/nginx/html

which means nothing more than taking our previously generated artifact, the web content in the ./public directory, and copying it into the default document root of this image.
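If you want to see the result locally before handing the job over to CI, a quick sketch (the image tag hugo-site-test is just a throwaway name, and it assumes a freshly built ./public directory next to the Dockerfile):

hugo -d public -b "http://localhost:8080"
docker build -t hugo-site-test .
docker run --rm -p 8080:80 hugo-site-test
# then browse to http://localhost:8080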

Don't forget to activate the registry in your Gitlab project before the next step.

Add the following at the bottom of your .gitlab-ci.yml:

buildimage:
  stage: buildimage
  image: docker:dind
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u gitlab-ci-token "$CI_REGISTRY" --password-stdin
    - docker build -t $REGISTRY_PATH:$CI_PIPELINE_ID .
    - docker push $REGISTRY_PATH:$CI_PIPELINE_ID
    - docker build -t $REGISTRY_PATH:latest .
    - docker push $REGISTRY_PATH:latest

This will do the following:

  • Tell the Gitlab Docker Runner to use (and pull, if needed) a special image: docker:dind, which stands for Docker-in-Docker and is nothing else than a Docker image running a Docker daemon inside itself, with the Docker tool stack (including docker-compose) included.
  • Log in to our Gitlab registry. We need two parameters here: one is the hostname of our registry, which is $CI_REGISTRY, and the other one is an access token, $CI_REGISTRY_PASSWORD. Both values are automatically set by Gitlab CI and passed into the context of our runner, so we do not need to worry about setting them.
  • Build the image as specified in our Dockerfile for the registry identified by $REGISTRY_PATH (more on this variable in a moment!) with a tag of $CI_PIPELINE_ID, another special variable that is automatically provided by Gitlab and is nothing else than the (global) number of the current CI pipeline, which is incremented automatically.
  • Push the resulting image to the registry
  • Repeat the above steps, but this time tagging the image as latest which can sometimes be useful.

Also change the stages: section at the top to look as follows:

stages:
  - build
  - buildimage

to include our newly created stage.

and immediately below add a section for some variables:

variables:
  REGISTRY_HOST: <your Gitlab Host>
  REPO: <path to your Gitlab project>

You will need to provide the appropriate values for these two; the first one is self-explanatory, and $REGISTRY_PATH, which is used in the script above, will normally be the full path to the project registry, i.e. <group>/<repository>.
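Putting it together, the top of my .gitlab-ci.yml now looks roughly like this (a sketch; I am assuming here that $REGISTRY_PATH is simply composed from the two variables above):

stages:
  - build
  - buildimage

variables:
  REGISTRY_HOST: <your Gitlab Host>
  REPO: <path to your Gitlab project>
  REGISTRY_PATH: $REGISTRY_HOST/$REPO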

Commit the Dockerfile and the changes to .gitlab-ci.yml to the repo and watch the magic happen:


The log of the buildimage step will look similar to this:

Running with gitlab-runner 10.4.0 (857480b6)
  on conxtor-docker (041c64fd)
Using Docker executor with image docker:dind ...
Using docker image sha256:c74ed0802c8988ef5c718a49abbfaec747e364c980ea91aa6ac793c4033eb4fc for predefined container...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:6e143f61a0dd3b518a623e40aae9180f7a0d8721ba05da88097ea4a25e048194 for build container...
Running on runner-041c64fd-project-4-concurrent-0 via gitlab-runner...
Fetching changes...
Removing public/
HEAD is now at c05562f Test pipeline only with build - fix runner tags
Checking out c05562f7 as master...
Skipping Git submodules setup
Downloading artifacts for build (67)...
Downloading artifacts from coordinator... ok        id=67 responseStatus=200 OK token=RXuisN_s
$ echo "$CI_REGISTRY_PASSWORD" | docker login -u gitlab-ci-token "$CI_REGISTRY" --password-stdin
Login Succeeded
$ docker build -t $REGISTRY_PATH:$FULL_VERSION .
Sending build context to Docker daemon  17.53MB

Step 1/2 : FROM nginx
 ---> 3f8a4339aadd
Step 2/2 : COPY public /usr/share/nginx/html
 ---> Using cache
 ---> d0dd104bc231
Successfully built d0dd104bc231
Successfully tagged
$ docker build -t $REGISTRY_PATH:latest .
Sending build context to Docker daemon  17.53MB

Step 1/2 : FROM nginx
 ---> 3f8a4339aadd
Step 2/2 : COPY public /usr/share/nginx/html
 ---> Using cache
 ---> d0dd104bc231
Successfully built d0dd104bc231
Successfully tagged
The push refers to repository []
b7769cfae70c: Preparing
a103d141fc98: Preparing
73e2bd445514: Preparing
2ec5c0a4cb57: Preparing
a103d141fc98: Pushed
b7769cfae70c: Pushed
73e2bd445514: Pushed
2ec5c0a4cb57: Pushed
1.50: digest: sha256:bb6aecf669bedab47a1e8c9ec84db6896e76d174b32cf1ef54fa04154fc14540 size: 1159
$ docker push $REGISTRY_PATH:latest
The push refers to repository []
b7769cfae70c: Preparing
a103d141fc98: Preparing
73e2bd445514: Preparing
2ec5c0a4cb57: Preparing
73e2bd445514: Layer already exists
a103d141fc98: Layer already exists
2ec5c0a4cb57: Layer already exists
b7769cfae70c: Layer already exists
latest: digest: sha256:bb6aecf669bedab47a1e8c9ec84db6896e76d174b32cf1ef54fa04154fc14540 size: 1159
Job succeeded

How to deploy to Kubernetes from Gitlab-CI?

The next step would be to deploy our Docker image to Kubernetes, be it by means of a pod, a deployment, a replica set or a daemon set, whichever we prefer, and somehow expose it via a host port or, preferably, an ingress. But Gitlab-CI doesn't know how to talk to Kubernetes yet. (A feature named Auto DevOps is being built, but it's not production-ready yet; I may update this article some day once it is.)

We basically have two options to access Kubernetes (using kubectl) from our runner:

  • Install kubectl locally on the same machine as our runner, provide a ~/.kube/config file to it and then call it from a runner of type “shell”. This would require us to create a new runner of that type on the runner machine, set tags in the CI pipeline, and a few more things. For that reason, I favoured the second solution, which uses the docker runner we already have in place.
  • Create a docker image that contains kubectl and the configuration for accessing our Kubernetes cluster. This solution is outlined below.

So, here is the recipe:

  • Create a new Git repository with registry enabled and call it, for instance, kubectl-<my-k8s-cluster-name>
  • Protect this repository well, it contains the credentials for full access to the Kubernetes cluster. You don't want this repository to be publicly accessible or visible. Never ever!
  • We need the following files in there:

    • A Dockerfile:

      FROM bash:4
      RUN apk --no-cache add gettext ca-certificates openssl \
          && wget -O /usr/local/bin/dumb-init <URL of a dumb-init binary> \
          && wget -O /usr/local/bin/kubectl <URL of a kubectl binary> \
          && chmod a+x /usr/local/bin/kubectl /usr/local/bin/dumb-init \
          && mkdir /root/.kube
      COPY config /root/.kube/config
      ENTRYPOINT ["/usr/local/bin/dumb-init","--","/usr/local/bin/"]
      CMD ["bash"]

      What we do here: starting from the very, very small bash:4 image from Dockerhub (based on the minimal Alpine Linux), we install the default certificate bundle, dumb-init (a very dumb init that does little more than provide an entrypoint for our image) and kubectl, copy our ~/.kube/config file to the location where kubectl will look for it, and build a Docker image from it all. The image is very small (only 34 MB), and since you will not be changing your Kubernetes credentials all that often, it will not need to be rebuilt very often.
    • Our ~/.kube/config file, of course in the usual format:

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: <...>
          server: <...>
        name: <...>
      contexts:
      - context:
          cluster: <...>
          user: admin
        name: <...>
      current-context: <...>
      kind: Config
      preferences: {}
      users:
      - name: admin
        user:
          client-certificate-data: <...>
          client-key-data: <...>
          token: <...>

      It's **very** important that in the server: line you use a URI for the Kubernetes API that the runner is able to access. If you expose the API on the internet (although maybe you should think twice before doing this), use the external URI under which you expose the K8s API; if not, you'll probably have to make sure the Gitlab runner and the K8s API are on the same internal network and can talk to each other.

    • A .gitlab-ci.yml file to make it all happen:

      stages:
        - buildimage

      buildimage:
        stage: buildimage
        image: docker:dind
        script:
          - echo "$CI_REGISTRY_PASSWORD" | docker login -u gitlab-ci-token "$CI_REGISTRY" --password-stdin
          - docker build -t $REGISTRY_PATH:$VERSION .
          - docker build -t $REGISTRY_PATH:latest .
          - docker push $REGISTRY_PATH:$VERSION
          - docker push $REGISTRY_PATH:latest

  • Commit; your pipeline will launch and put the image in the registry. Before moving on, you can give the image a quick test, as sketched below.
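A quick way to test the freshly built image by hand (a sketch; the image path is an example, use your own registry host and repository name, and do a docker login against the registry first):

docker pull <your registry host>/<group>/kubectl-<my-k8s-cluster-name>:latest
docker run --rm --entrypoint /usr/local/bin/kubectl <your registry host>/<group>/kubectl-<my-k8s-cluster-name>:latest get nodes

If this prints the nodes of your cluster, the embedded ~/.kube/config works.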

Deploy our Hugo-powered blog to Kubernetes (finally!):

The moment you’ve been yearning for is about to arrive. We now have all the tools at hand in our pipeline to deploy our Docker image to our Kubernetes cluster. But there are some prerequisites first:

  • A namespace in our cluster in which to run the upcoming deployment. Using the default namespace is fine for experiments or small setups, but you might want something else. Create this namespace before trying to deploy (kubectl create namespace <your namespace>). This example will use default for simplicity's sake.
  • Our Kubernetes cluster must be able to pull images from our Gitlab registry:
    • On the one hand, this requires network access. Normally your K8s cluster should be able to get images from Docker Hub, so if your Gitlab is accessible from the outside world, you will be fine.
    • You will also need to provide Kubernetes with credentials to get the image from the Gitlab registry. This requires what is called a pull secret. It must be created in the namespace you have chosen to use above, and the kubectl command is as follows:

      kubectl -n <namespace> create secret docker-registry <secret name> --docker-server=<registry URI> --docker-username=<your Gitlab User> --docker-password=<your Gitlab password> --docker-email=<your email>
  • Files describing our Kubernetes Deployment, service and, if wanted, an ingress:

    • deployment.yaml:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: <choose a deployment name>
        namespace: <Chosen namespace>
        labels:
          run: <choose a label>
      spec:
        replicas: <how many replicas we want>
        strategy:
          type: RollingUpdate
        revisionHistoryLimit: 0
        selector:
          matchLabels:
            run: <same label as above>
        template:
          metadata:
            labels:
              run: <same label again>
          spec:
            containers:
              - name: kerkhoff-web
                image: <full path to our registry>:latest
                imagePullPolicy: Always
                ports:
                  - containerPort: 80
            imagePullSecrets:
              - name: <name of the secret created above>
    • service.yaml:

      apiVersion: v1
      kind: Service
      metadata:
        name: <pick a service name>
        namespace: <Chosen namespace>
        labels:
          run: <same label as above>
      spec:
        selector:
          run: <same label as above>
        ports:
          - protocol: TCP
            port: 80
            targetPort: 80
    • ingress.yaml:

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: <choose an ingress name>
        namespace: <Chosen namespace>
      spec:
        rules:
          - host: <your public host name>
            http:
              paths:
                - path: /
                  backend:
                    serviceName: <Service name from above>
                    servicePort: 80

    Of course, you will need to substitute the values between <> appropriately in the files above to suit your setup. Instead of an ingress, you could also expose the service via a host port, but I find the ingress (with a multi-name load balancer in front of it) more convenient. In my setup I use pfSense as a firewall, which has the very efficient HAProxy as an option for load balancing and provides very straightforward support for Let's Encrypt certificates through its ACME package.

  • We will also need to add some lines to our .gitlab-ci.yml:

  deploy:
    stage: deploy
    image: <your registry host>/<path to your kubectl image repository>:latest
    script:
      - kubectl -n $NAMESPACE apply -f service.yaml
      - kubectl -n $NAMESPACE apply -f ingress.yaml
      - kubectl -n $NAMESPACE apply -f deployment.yaml

and add the deploy stage at the end of the stages: section. At the top, in the variables: section, we also need to add a variable:

NAMESPACE: <your chosen namespace>
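With that, the top of the file should look roughly like this (same assumption about $REGISTRY_PATH as before):

stages:
  - build
  - buildimage
  - deploy

variables:
  REGISTRY_HOST: <your Gitlab Host>
  REPO: <path to your Gitlab project>
  REGISTRY_PATH: $REGISTRY_HOST/$REPO
  NAMESPACE: <your chosen namespace>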

Now, commit all the previous changes, and get ready to run your blog the DevOps way. Every time you edit or add new content, a new Docker image with your content will be built. The line imagePullPolicy: Always will ensure that the image tagged with latest gets pulled when the deployment is updated.
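Once the pipeline has gone green, you can check what actually landed in the cluster; a quick sketch (the resource names depend on what you chose in the manifests):

kubectl -n <your chosen namespace> get deployments,pods,services,ingresses
kubectl -n <your chosen namespace> rollout status deployment/<your deployment name>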

Please do not hesitate to ask me anything about this article in the comments. I know it's quite long, but I hope you find it at least a bit interesting. It's the result of several months of experiments on how to optimize my lab setup, including detours via Rancher, Docker Swarm and other solutions which I discarded one after another, because really, at this moment, Kubernetes is without much doubt the leading container orchestration platform on the horizon.
