Docker logs
If you have been running a lot of Docker containers for quite some time, you will notice that their logs take up a lot of space.
Currently, I have 25 running containers on a bare-metal server. The storage is fairly small (~300 GiB), but I never ran into issues with the containers until a few days ago.
Some containers were failing
Lots of database containers were failing (thanks to Uptime Kuma for flagging it). I looked at the logs and found a constant stream of `no space left on device` errors, which was odd.
Also, a reminder to run `docker system prune` every now and then.
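For example, `docker system df` shows how much space images, containers, and volumes are using, and `docker system prune` cleans up the unused bits (the `-a` variant also removes all unused images, so double-check before running it):

```sh
# Show disk usage for images, containers, local volumes, and build cache
docker system df

# Remove stopped containers, dangling images, unused networks, and build cache
docker system prune

# More aggressive: also remove every image not used by a running container
docker system prune -a
```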
I was not running any tasks that might require a lot of disk space. First, I checked with `ncdu` and found nothing unusual.
As a temporary solution, I transferred around 50 GiB of data to another server just so I could keep those containers running.
My normal `ncdu` run on the root directory doesn't look at other paths such as `/var`, which is obvious in retrospect, so I missed that path.
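Pointing the usual disk tools at `/var` directly would have surfaced the culprit much sooner. A quick sketch (the paths assume a standard layout):

```sh
# Summarize the biggest directories under /var, largest first
sudo du -xh --max-depth=1 /var | sort -rh | head

# Or scan /var interactively
sudo ncdu /var
```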
After digging around for an hour, I realized that the real culprit had something to do with Docker, because that's what I use the most on this server.
First, I looked at the volumes. I use a mix of bind mounts and Docker volumes. I looked at both but, unfortunately, did not find anything interesting.
Finally, I stumbled upon the Docker logs folder, which is at `/var/lib/docker/containers`. It was taking up around 106 GiB of space:

```sh
sudo sh -c "du -ch /var/lib/docker/containers/*/*-json.log | grep total"
```
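To see which containers are responsible, you can list the individual log files by size; each file name starts with the container ID, which `docker ps` can map back to a name. A rough sketch, assuming the default `json-file` log driver:

```sh
# List per-container log files, biggest first
sudo sh -c "du -sh /var/lib/docker/containers/*/*-json.log | sort -rh | head"

# Map a container ID (the directory/file name prefix) back to a container name
docker ps --no-trunc --format '{{.ID}}  {{.Names}}'
```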
To empty all the logs, I ran the following (truncating the files in place is safer than deleting them while the containers are still writing to them):

```sh
sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
```
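If you only want to clear the log of one noisy container rather than all of them, `docker inspect` reports where its log file lives (replace `<container>` with the actual container name or ID):

```sh
# Print the log file path for a single container
docker inspect --format '{{.LogPath}}' <container>

# Truncate just that one file
sudo truncate -s 0 "$(docker inspect --format '{{.LogPath}}' <container>)"
```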
Sure enough, I now had plenty of disk space. It was a relief. Now it was time to set up log rotation for all the containers. Important lesson learned.
Log rotate
As described in the official docs, to cap the log file size, create `daemon.json` at `/etc/docker` with the following:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}
```
You can change `max-size` (and `max-file`) to your preferred values, then restart the Docker daemon to apply the change. One catch: the defaults in `daemon.json` only apply to containers created after the change, so existing containers have to be recreated to pick up the new log options.
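Here is a rough sketch of applying it, assuming systemd and containers started with plain `docker run`; the `nginx` image is only a placeholder, and with Compose the per-service `logging:` options do the same job:

```sh
# Check the JSON syntax before restarting the daemon (one way of many)
sudo python3 -m json.tool /etc/docker/daemon.json

# Restart the daemon so the new defaults take effect for newly created containers
sudo systemctl restart docker

# Existing containers keep their old log settings until recreated.
# The same options can also be set per container at run time:
docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx
```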