Traefik on Docker Swarm and Filebeat - a logging problem

For a while now, I have been running a 3-node Docker Swarm, and I am very satisfied with it. It does the trick for my personal apps, website, and lab setup very well; Kubernetes would be overkill for this sort of setup, and Portainer is an excellent UI for it.

Of course, like any DevOps-oriented Systems Engineer, I use the ELBK (Elasticsearch, Logstash, Beats, Kibana) stack for logging and monitoring this setup. Docker is covered, and most of my syslogs go into Elasticsearch very easily with Filebeat or Logstash. This gives me good insight into my environment.

As a load balancer and frontend to expose web apps and services in my Docker Swarm, I use the excellent Traefik, which integrates very nicely with Swarm and is easily configured with labels when using docker-compose or a compose file to deploy services in Portainer.

Traefik runs easily on a Docker Swarm with the following docker-compose.yaml file:

version: '3.2'
services:  
  traefik:    
    image: traefik 
    deploy:      
      mode: global 
    command: --docker --docker.swarmMode --docker.domain=<my Docker domain> --docker.watch --api --accessLog.format="json"
    networks:      
      - traefik-net  
    ports:      
      - "80:80"      
      - "8080:8080"
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
networks:  
  traefik-net:
    external:
      name: traefik-net
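Deploying this is quick from any manager node. A sketch, assuming the external overlay network does not exist yet and using `traefik` as an arbitrary stack name of my choosing:

```shell
# Create the shared overlay network once (the compose file marks it as external)
docker network create --driver overlay traefik-net

# Deploy the stack from a Swarm manager node
docker stack deploy -c docker-compose.yaml traefik
```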

Note that this is a global service, so it will run on all nodes in the swarm cluster, exposing both port 80 (for content) and port 8080 (for the Traefik dashboard). There is also an external network, which is shared with all application backends that need to be served by Traefik. See a sample below:

      labels:
        traefik.backend.loadbalancer.swarm: "true"
        traefik.backend: "cool_backend_name"
        traefik.docker.network: "traefik-net"
        traefik.frontend.passHostHeader: "true"
        traefik.frontend.rule: "Host:host1.com, www.host1.com"
        traefik.port: 80

I include just the labels part of the service, but you probably get the idea.

You will notice that I am not using the built-in HTTPS functionality or the ACME (Let’s Encrypt) functionality, because PfSense and the built-in HAProxy take care of my SSL termination and certificate management.

Now to what bothers me: Filebeat has an excellent module for Traefik logs, including nice visualizations and dashboards, but the logs are in /var/lib/docker/containers/&lt;container_id&gt;, and the container ID has the habit of changing when I restart or update Traefik to a newer version. So accessing these logs directly in Filebeat isn’t really an option: it would ingest the logs of ALL containers, and the Filebeat input I would have to use is the docker one, which wouldn’t import the Traefik access log with the proper field mappings, which is what I was after.

Sending the Traefik logs directly to Elasticsearch / Logstash also wasn’t an option for the same reason as above: field mapping.

So I thought: how can I get the logs out of the container and into Filebeat?

Well, volumes to the rescue, step by step:

  1. Install filebeat on all your Swarm nodes (if not already done).
  2. Create a directory in a location of your choice in the filesystem of each Docker node. I chose /data/traefik/ but feel free to pick your own.
  3. Now we need to modify our traefik service to write its access logs into that directory. First we add the directory as a volume:

    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - /data/traefik:/log

  Then we tell traefik to log to a file in that directory. We need to use the "common" format for Filebeat to work:

    command: --docker --docker.swarmMode --docker.domain=conxtor.com --docker.watch --api --accessLog.format="common" --accessLog.filePath="/log/traefik.log"
  4. Configure Filebeat (Elasticsearch output)
  5. Enable the Traefik module in Filebeat on each Docker node: filebeat modules enable traefik
  6. Configure the Filebeat Traefik module in the file /etc/filebeat/modules.d/traefik.yml on each Docker node (adjust the directory to your setup):

    - module: traefik
      # Access logs
      access:
        enabled: true

        # Set custom paths for the log files. If left empty,
        # Filebeat will choose the paths depending on your OS.
        var.paths: ["/data/traefik/*"]
    
  7. Start and enable Filebeat:

    systemctl enable filebeat
    systemctl start filebeat
    
  8. We need to set up log rotation for our freshly added log files on each Docker node to prevent our filesystem from filling up. Traefik will rotate the log file when it receives the USR1 signal, but we need to send that signal to the process inside the container, so this requires some additional (elec)trickery:

  9. Add a file named traefik in your /etc/logrotate.d directory with the following content:

    /data/traefik/*.log {
      su root root
      hourly
      rotate 30
      missingok
      notifempty
      delaycompress
      dateext
      dateformat .%Y-%m-%d
      create 0644 root root
      postrotate
        docker kill --signal="USR1" $(docker ps | grep traefik | awk '{print $1}')
      endscript
    }
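Before relying on the rotation, it is worth a quick sanity check on each node; the -d flag makes logrotate print what it would do without touching anything:

```shell
# Dry run: parse the config and show the planned actions without rotating
logrotate -d /etc/logrotate.d/traefik

# Optionally force one real rotation, then check that Traefik
# reopened its log file in /data/traefik/
logrotate -f /etc/logrotate.d/traefik
ls -l /data/traefik/
```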
    

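Step 4 above is terse, so here is a minimal sketch of the matching /etc/filebeat/filebeat.yml fragment; the hostnames elasticsearch.example.com and kibana.example.com are placeholders for your own setup:

```yaml
# /etc/filebeat/filebeat.yml (fragment)
output.elasticsearch:
  hosts: ["http://elasticsearch.example.com:9200"]

# Needed so `filebeat setup` can load the Traefik module's
# dashboards into Kibana
setup.kibana:
  host: "http://kibana.example.com:5601"
```

Running `filebeat setup` once after this loads the index template and the Traefik dashboards, and `filebeat test output` verifies the Elasticsearch connection before you start the service.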
And that’s it. Perfect Traefik logs with a nice Kibana dashboard:

[Image: Traefik 1]

[Image: Traefik 2]

About Volker Kerkhoff
Just another DevOps Engineer. Because International IT Mystery Man isn’t a job description.