I recently ran into a problem: I run MySQL in Docker under Kubernetes (k8s), and over two days the system load climbs from about 1 to 500+. Here is what /var/log/messages shows:
Aug 17 19:50:07 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:07.435900074+08:00" level=debug msg="Calling GET /v1.26/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
Aug 17 19:50:07 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:07.442853242+08:00" level=debug msg="Calling GET /v1.26/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D&limit=0"
Aug 17 19:50:07 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:07.703904603+08:00" level=debug msg="Calling GET /v1.23/containers/json?limit=0"
Aug 17 19:50:07 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:07.804893547+08:00" level=debug msg="Calling GET /v1.26/containers/json?filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
Aug 17 19:50:07 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:07.809574497+08:00" level=debug msg="Calling GET /v1.26/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%2C%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"
Aug 17 19:50:08 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:08.427098775+08:00" level=debug msg="Calling GET /version"
Aug 17 19:50:08 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:08.427932286+08:00" level=debug msg="Calling GET /version"
Aug 17 19:50:08 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:08.428708265+08:00" level=debug msg="Calling GET /v1.26/version"
Aug 17 19:50:08 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:08.453153388+08:00" level=debug msg="Calling GET /v1.26/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
Aug 17 19:50:08 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:08.456364735+08:00" level=debug msg="Calling GET /v1.26/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D&limit=0"
Aug 17 19:50:08 SVR5798HW1288 systemd[1]: Starting Cleanup of Temporary Directories...
Aug 17 19:50:08 SVR5798HW1288 dockerd[1890]: time="2018-08-17T19:50:08.656418404+08:00" level=debug msg="Calling GET /v1.26/version"
Aug 17 19:50:08 SVR5798HW1288 systemd[1]: Started Cleanup of Temporary Directories.
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] Linux version 4.10.13-1.el7.elrepo.x86_64 (mockbuild@Build64R7) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Thu Apr 27 12:06:06 EDT 2017
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-4.10.13-1.el7.elrepo.x86_64 root=/dev/mapper/VolGroup00-lv_root ro crashkernel=auto net.ifnames=0 biosdevname=0
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] e820: BIOS-provided physical RAM map:
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009d3ff] usable
Aug 19 08:30:51 SVR5798HW1288 kernel: [ 0.000000] BIOS-e820: [mem 0x000000000009d400-0x000000000009ffff] reserved
Nothing at all was logged on August 18. I rebooted the machine on August 19 at 08:30 because the system had hung; there was no kernel panic and no further log entries. My question: is it possible that systemd-tmpfiles-clean.service is what hangs the system? I have hit this problem twice now and don't know how to troubleshoot it; the checks I plan to run next time are sketched below.
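For reference, this is the checklist I put together to see whether the tmpfiles cleanup is actually involved (these are my own guesses, not a confirmed diagnosis; the unit names are the stock ones shipped with systemd on CentOS 7):

    systemctl status systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
    systemctl list-timers systemd-tmpfiles-clean.timer      # when the cleanup last ran and when it runs next
    journalctl -u systemd-tmpfiles-clean.service --since "2018-08-17" --until "2018-08-19"
    time systemd-tmpfiles --clean                           # run the same cleanup by hand and see how long it takes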
kernel: 4.10.13
systemd: 219
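The next time the load starts climbing I was also going to dump the blocked tasks, since the Linux load average counts processes stuck in uninterruptible (D) state as well as runnable ones. This is only a sketch using standard kernel facilities, nothing specific to my setup:

    uptime                                             # confirm the load average really is at 500+
    ps axo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'    # list tasks in uninterruptible sleep and what they are waiting on
    echo 1 > /proc/sys/kernel/sysrq                    # allow magic SysRq if it is not already enabled
    echo w > /proc/sysrq-trigger                       # dump blocked (D-state) tasks to the kernel log
    dmesg | tail -n 200                                # the hung task detector, if enabled, also logs "blocked for more than 120 seconds"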