About log file size #365

Closed · hyer opened this issue May 8, 2017 · 6 comments

hyer commented May 8, 2017

API Umbrella is great. However, I find that the log files grow quickly on my server. I deployed API Umbrella via Docker with the default settings, and the log files are really huge. I searched for big files on the system and got:
root@iZwz99t8ocjfg76qbpwasxZ:~# find / -size +100M |xargs ls -lh
find: '/proc/4506/task/4506/fd/5': No such file or directory
find: '/proc/4506/task/4506/fdinfo/5': No such file or directory
find: '/proc/4506/fd/5': No such file or directory
find: '/proc/4506/fdinfo/5': No such file or directory
-r-------- 1 root root 128T May 8 15:28 /proc/kcore
-rw-r--r-- 1 999 999 256M May 8 14:56 /var/lib/docker/aufs/diff/d4205da8f48d0c860ab9881968b784db27ba87a80533304ff1fcc9adb76b23dc/opt/api-umbrella/var/trafficserver/cache.db
-rw-r--r-- 1 999 999 256M May 8 14:56 /var/lib/docker/aufs/mnt/d4205da8f48d0c860ab9881968b784db27ba87a80533304ff1fcc9adb76b23dc/opt/api-umbrella/var/trafficserver/cache.db
-rw-rw-rw- 1 999 999 613M May 5 03:00 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/0/index/_5042.fdt
-rw-rw-rw- 1 999 999 165M May 5 03:01 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/0/index/_5042_Lucene50_0.tim
-rw-rw-rw- 1 999 999 104M May 5 09:07 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/0/index/_5im7.cfs
-rw-rw-rw- 1 999 999 123M May 5 15:38 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/0/index/_62dw.fdt
-rw-rw-rw- 1 999 999 640M May 5 04:38 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/1/index/_550q.fdt
-rw-rw-rw- 1 999 999 172M May 5 04:39 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/1/index/_550q_Lucene50_0.tim
-rw-rw-rw- 1 999 999 106M May 5 10:54 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/1/index/_5nzs.cfs
-rw-rw-rw- 1 999 999 137M May 5 17:47 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/1/index/_68vs.fdt
-rw-rw-rw- 1 999 999 610M May 5 02:49 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/2/index/_4zk1.fdt
-rw-rw-rw- 1 999 999 164M May 5 02:50 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/2/index/_4zk1_Lucene50_0.tim
-rw-rw-rw- 1 999 999 104M May 5 08:55 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/2/index/_5hzz.cfs
-rw-rw-rw- 1 999 999 123M May 5 15:21 /var/lib/docker/vfs/dir/916ef7627dd8673e58a5b959596f3df9e06b4dca7cf5fee1602fa3c19eb48e79/elasticsearch/api-umbrella/nodes/0/indices/api-umbrella-logs-v1-2017-05/2/index/_61ih.fdt
-rw-r--r-- 1 999 999 21G May 8 15:28 /var/lib/docker/vfs/dir/b16612ae86d71876b7fb38ce8ddd44ade2b1d70e7d2c37a87c2dd8ca5950744f/nginx/access.log
-rw-r--r-- 1 999 999 1.2G May 5 20:57 /var/lib/docker/vfs/dir/b16612ae86d71876b7fb38ce8ddd44ade2b1d70e7d2c37a87c2dd8ca5950744f/rsyslog/requests.log.gz
-rw-r--r-- 1 999 999 7.1G May 8 15:28 /var/lib/docker/vfs/dir/b16612ae86d71876b7fb38ce8ddd44ade2b1d70e7d2c37a87c2dd8ca5950744f/trafficserver/access.blog

I wonder if there is any way to set the log file size? Thanks.

martinzuern (Contributor) commented May 8, 2017

Well, you could write a cron job to rotate the log files every couple of hours or days and delete them after x days.
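
For example, a rough sketch of such a cron script (the log path is just an assumption based on the container layout shown above, and the copy-then-truncate approach mirrors logrotate's copytruncate option so nginx keeps writing to the same file descriptor, at the cost of possibly losing a few lines during the copy):

#!/bin/sh
# Hypothetical daily cron script; the log path is an assumption based on the
# container layout shown in the find output above.
LOG=/opt/api-umbrella/var/log/nginx/access.log
# copy today's log aside, truncate the live file, and compress the copy
cp "$LOG" "$LOG.$(date +%F)" && : > "$LOG"
gzip -f "$LOG.$(date +%F)"
# delete rotated copies older than 7 days
find "$(dirname "$LOG")" -name 'access.log.*.gz' -mtime +7 -delete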

hyer (Author) commented May 9, 2017

OK, thanks. However, it would be better if we could configure the log file size.

GUI (Member) commented May 10, 2017

The API Umbrella package includes a default configuration file for logrotate, which will take care of compressing and removing old files. By default, it compresses everything older than 1 day, and keeps all the logs for 90 days. So it's based on time, rather than file size, but it at least helps keep things from growing indefinitely.
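
As a rough illustration (not the exact file we ship, and the path is an assumption), that policy corresponds to a logrotate stanza along these lines:

# Illustrative logrotate stanza, not the exact packaged config; the path is an
# assumption. Rotate daily, compress everything older than one rotation, and
# keep 90 rotated files.
/opt/api-umbrella/var/log/nginx/*.log {
  daily
  rotate 90
  compress
  delaycompress
  missingok
  notifempty
}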

However, it looks like we're perhaps not installing logrotate as a dependency inside the Docker container (it's installed on most distros by default, but not in the more minimal docker environments). Since it looks like you're running the Docker version, I suspect this might explain why you're seeing log files growing indefinitely in size.

It would be easy for us to update the docker build to include logrotate, which would help keep these file sizes in check. However, that's perhaps not the most Docker-ish solution, so as a more proper fix, we might want to see if we can redirect all our log output to STDOUT/STDERR, and then you can manage that output with Docker (but that will require more updates on our end).
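
For context, once everything goes to STDOUT/STDERR you could cap the captured output with Docker's own logging options, e.g. the json-file driver's size limits (the container and image names below are just placeholders):

# Cap Docker's captured STDOUT/STDERR logs at 5 rotated files of 100 MB each.
# "api-umbrella" and "api-umbrella-image" are placeholder names.
docker run -d --name api-umbrella \
  --log-driver json-file \
  --log-opt max-size=100m \
  --log-opt max-file=5 \
  api-umbrella-image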

@hyer: So from your perspective, would keeping the files compressed and rotated with logrotate and the default settings (keeping files for 90 days) be sufficient? Or are you managing your Docker logs in some other way, and would logging everything to STDOUT/STDERR be preferable? Or would logrotate work, but with different settings (eg, less than 90 days or based on file size instead)? Or any other ideas?

Thanks for bringing this to our attention!

hyer (Author) commented May 10, 2017

Keeping the files compressed and rotated with logrotate would be sufficient. I think this should be the default setting to avoid problems with disk capacity, since a newbie may neglect this until they find the server has crashed.
Thanks.

martinzuern (Contributor) commented Jun 2, 2017

Actually, as @GUI mentioned, routing the logs to STDOUT/STDERR would be a more Docker-ish solution, and it isn't too complicated. E.g., nginx solves it by simply symlinking the log files to STDOUT/STDERR:

# forward request and error logs to docker log collector
RUN mkdir --parents /opt/api-umbrella/var/log/nginx \
  && ln -sf /proc/1/fd/1 /opt/api-umbrella/var/log/nginx/access.log \
  && ln -sf /proc/1/fd/1 /opt/api-umbrella/var/log/nginx/current \
  && ln -sf /proc/1/fd/2 /opt/api-umbrella/var/log/nginx/error.log

There is also a note about this in the official Docker documentation.

The problem in this case is probably the setup with svlogd; we'd actually need something that pipes all log output from there to /proc/1/fd/1 and adds a tag to every line indicating which service the log came from.
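
One rough idea (an untested sketch, not a worked-out solution): swap each service's svlogd-based ./log/run script for something that tags stdin with the service name and forwards it to PID 1's STDOUT, e.g.:

#!/bin/sh
# Hypothetical runit ./log/run replacement for a service named "nginx": prefix
# each log line with the service name and forward it to the container's STDOUT
# (PID 1's fd 1) instead of letting svlogd write it to disk.
exec awk -v svc="nginx" '{ print "[" svc "] " $0; fflush() }' >> /proc/1/fd/1
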
@GUI Any ideas to approach that best?

GUI added a commit that referenced this issue on Jul 12, 2017:
While not the best solution, this should help ensure log files get rotated inside docker containers: #365
GUI added this to the v0.14.3 milestone on Jul 13, 2017
GUI (Member) commented Jul 13, 2017

Sorry I missed this in our last v0.14.2 release, but we just released v0.14.3, which adds logrotate as an explicit dependency. I think that should prevent this kind of unbounded log growth inside the Docker container, but let us know if you're still seeing any issues with the Docker environment and log files not rotating.
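
If you want to double-check that rotation is actually happening inside a running container, forcing a verbose logrotate run should show the files being rotated (the container name and config path here are assumptions, so adjust them for your setup):

# Force an immediate, verbose logrotate run inside the container to verify
# rotation; the container name and the config path are assumptions.
docker exec api-umbrella logrotate --force --verbose /etc/logrotate.d/api-umbrella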

But as noted, the more ideal approach is probably to route all output to STDOUT/STDERR. That part is not addressed yet, but I've opened a separate issue for it at #376.
