For a while now, I have been running a 3-node Docker Swarm. I am very satisfied with it: it does the trick for my personal apps, website, and lab setup very well. Kubernetes would be overkill for this sort of setup, and Portainer is an excellent UI for it.

Of course, like any DevOps-oriented Systems Engineer, I use the ELBK (Elasticsearch, Logstash, Beats, Kibana) stack for logging and monitoring this setup. I have Docker covered, and most of my syslogs go into Elasticsearch very easily with Filebeat or Logstash, which gives me good insight into my environment. As a load balancer and as a frontend to expose web apps and services in my Docker Swarm, I use the excellent Traefik, which integrates very nicely with Swarm and is easily configured with labels when using `docker-compose` or a compose file to deploy services in Swarmpit.

Traefik runs easily on a Docker Swarm with the following `docker-compose.yaml`. Below you can find a version updated for Traefik v2:

```yaml
version: '3.3'
services:
  traefik:
    image: traefik:latest
    command:
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=traefik-net"
      - "--entrypoints.web.address=:80"
      - "--metrics"
      - "--metrics.prometheus"
      - "--metrics.prometheus.entryPoint=traefik"
      - "--metrics.prometheus.buckets=0.1,0.3,1.2,5.0"
      - "--accessLog.format=json"
      - "--api=true"
      - "--api.dashboard=true"
      - "--api.insecure=true"
    ports:
      - 80:80
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-net
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
networks:
  traefik-net:
    external: true
```
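Since `traefik-net` is declared as an external network, it has to exist before the stack is deployed. A minimal sketch, assuming you deploy from a manager node and name the stack `traefik`:

```
$> docker network create --driver=overlay traefik-net
$> docker stack deploy -c docker-compose.yaml traefik
```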

I have also modified this to run only on the 3 manager nodes of my swarm. Why? Because the swarmpit stack is running on the manager nodes, and I want to be able to access it even when my worker nodes are down or drained for whatever reason. There is also an external network, which is shared with all application backends that need to be served by Traefik. See a sample below:

```yaml
labels:
  traefik.backend.loadbalancer.swarm: "true"
  traefik.backend: "cool_backend_name"
  traefik.docker.network: "traefik-net"
  traefik.frontend.passHostHeader: "true"
  traefik.frontend.rule: "Host:host1.com,www.host1.com"
  traefik.port: 80
```

I include just the labels part of the service/stack in the `docker-compose.yml` file, but you probably get the idea.
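Note that these are Traefik v1-style labels. With the v2 configuration shown above, the equivalent would look roughly like this (a sketch; `cool_backend` is a placeholder router/service name, and in Swarm mode v2 reads the labels from the service definition, i.e. under `deploy:`):

```yaml
deploy:
  labels:
    traefik.enable: "true"
    traefik.docker.network: "traefik-net"
    traefik.http.routers.cool_backend.rule: "Host(`host1.com`) || Host(`www.host1.com`)"
    traefik.http.routers.cool_backend.entrypoints: "web"
    # Swarm mode cannot auto-detect the published port, so set it explicitly
    traefik.http.services.cool_backend.loadbalancer.server.port: "80"
```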

You will notice that I am not using the built-in HTTPS functionality or the ACME (Let's Encrypt) functionality, because pfSense and its built-in HAProxy take care of my SSL termination and certificate management.

Now to what bothers me: Filebeat has an excellent module for Traefik logs, including nice visualizations and dashboards, but the logs live in `/var/lib/docker/containers/<container_id>`, and the `<container_id>` has the habit of changing whenever I restart Traefik or update it to a newer version. So accessing these logs directly with Filebeat isn't really an option: it would ingest the logs of ALL containers, and I would have to use Filebeat's `docker` input, which wouldn't import the Traefik access log with the proper field mappings, which is what I was after.

Sending the Traefik logs directly to Elasticsearch / Logstash wasn't an option either, for the same reason as above: field mapping. So I thought, how can I get the logs out of the container and into Filebeat?

Well, volumes to the rescue, step by step:

1. Install Filebeat on all your Swarm nodes (if not already done).

2. Create a directory in a location of your choice in the filesystem of each Docker node. I chose `/data/traefik/` but feel free to pick your own.
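On each node that is a one-liner, assuming my path:

```
$> mkdir -p /data/traefik
```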

3. Now we need to modify our Traefik service to write its access logs into that directory. First we add the directory as a volume, next to the Docker socket:

```yaml
volumes:
  - type: bind
    source: /var/run/docker.sock
    target: /var/run/docker.sock
  - /data/traefik:/log
```

and then we tell Traefik to log to a file in that directory. We need to use the "common" format for the Filebeat module to work:

```yaml
command: --docker --docker.swarmMode --docker.domain=conxtor.com --docker.watch --api --accessLog.format="common" --accessLog.filePath="/log/traefik.log"
```
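These are the Traefik v1 options. On the v2 configuration from earlier, the equivalent access-log options would look roughly like this (a sketch):

```yaml
command:
  - "--accesslog=true"
  - "--accesslog.filepath=/log/traefik.log"
  - "--accesslog.format=common"
```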

4. Configure Filebeat (Elasticsearch output)
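A minimal `/etc/filebeat/filebeat.yml` for this could look like the following sketch; the hosts, credentials, and Kibana endpoint are placeholders for your own setup:

```yaml
# Sketch: ship directly to Elasticsearch (adjust hosts/credentials)
output.elasticsearch:
  hosts: ["http://elasticsearch.example.com:9200"]
  username: "filebeat_writer"
  password: "changeme"

# Lets `filebeat setup` load the bundled Traefik dashboards into Kibana
setup.kibana:
  host: "http://kibana.example.com:5601"
```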

5. Enable the Traefik module in Filebeat on each Docker node:

```
$> filebeat modules enable traefik
```

6. Configure the Filebeat Traefik module in the file `/etc/filebeat/modules.d/traefik.yml` on each Docker node (change the path according to your setup):

```yaml
- module: traefik
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/data/traefik/*"]
```

7. Start and enable Filebeat:

```
$> systemctl enable filebeat
$> systemctl start filebeat
```
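It may also be worth loading the bundled dashboards and checking configuration and connectivity first; these are standard Filebeat commands:

```
$> filebeat setup --dashboards
$> filebeat test config
$> filebeat test output
```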

8. We need to set up log rotation for our freshly added logfiles on each Docker node to prevent our filesystem from filling up. Traefik will close and reopen its log file when it receives the USR1 signal, but we need to send that signal to the process inside the container, so this requires some additional (elec)trickery:
- Add a file named `traefik` in your `/etc/logrotate.d` directory with the following content:

```
/data/traefik/*.log {
    su root root
    hourly
    rotate 30
    missingok
    notifempty
    delaycompress
    dateext
    dateformat .%Y-%m-%d
    create 0644 root root
    postrotate
        docker kill --signal="USR1" $(docker ps | grep traefik | awk '{print $1}')
    endscript
}
```
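One caveat: `docker ps | grep traefik` matches any line containing "traefik", image names included. If you want to be stricter, `docker ps` supports filters, so a possible alternative for the `postrotate` line is:

```
docker kill --signal="USR1" $(docker ps --filter "name=traefik" --quiet)
```

Also note that most distributions only run logrotate once a day from cron, so the `hourly` directive only takes effect if logrotate itself is scheduled to run hourly.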

And there you go: nice logs and dashboards in Kibana.