Thanks, Laurent, for the brief explanation. That really helps.
I have checked the Private_Dirty memory in "smaps" for an s6-supervise
process, and I don't see any single mapping consuming more than 8 kB. Just
posting it here for reference.
grep Private_Dirty /proc/991/smaps
Private_Dirty: 0 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 8 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 8 kB
Private_Dirty: 8 kB
Private_Dirty: 8 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
Private_Dirty: 8 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 8 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 4 kB
Private_Dirty: 0 kB
Private_Dirty: 0 kB
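(If it is easier to compare as a single number, the per-mapping values can
be summed; this is just a sketch, assuming the usual /proc/<pid>/smaps field
layout, with 991 standing in for the s6-supervise PID:

awk '/^Private_Dirty:/ { sum += $2 } END { print sum " kB" }' /proc/991/smaps
)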
cat /proc/991/smaps
00010000-00014000 r-xp 00000000 07:00 174 /bin/s6-supervise
00023000-00024000 r--p 00003000 07:00 174 /bin/s6-supervise
00024000-00025000 rw-p 00004000 07:00 174 /bin/s6-supervise
00025000-00046000 rw-p 00000000 00:00 0 [heap]
b6e1c000-b6e2d000 r-xp 00000000 07:00 3652 /lib/libpthread-2.31.so
b6e2d000-b6e3c000 ---p 00011000 07:00 3652 /lib/libpthread-2.31.so
b6e3c000-b6e3d000 r--p 00010000 07:00 3652 /lib/libpthread-2.31.so
b6e3d000-b6e3e000 rw-p 00011000 07:00 3652 /lib/libpthread-2.31.so
b6e3e000-b6e40000 rw-p 00000000 00:00 0
b6e40000-b6e45000 r-xp 00000000 07:00 3656 /lib/librt-2.31.so
b6e45000-b6e54000 ---p 00005000 07:00 3656 /lib/librt-2.31.so
b6e54000-b6e55000 r--p 00004000 07:00 3656 /lib/librt-2.31.so
b6e55000-b6e56000 rw-p 00005000 07:00 3656 /lib/librt-2.31.so
b6e56000-b6f19000 r-xp 00000000 07:00 3613 /lib/libc-2.31.so
b6f19000-b6f28000 ---p 000c3000 07:00 3613 /lib/libc-2.31.so
b6f28000-b6f2a000 r--p 000c2000 07:00 3613 /lib/libc-2.31.so
b6f2a000-b6f2c000 rw-p 000c4000 07:00 3613 /lib/libc-2.31.so
b6f2c000-b6f2e000 rw-p 00000000 00:00 0
b6f2e000-b6f4d000 r-xp 00000000 07:00 3665 /lib/libskarnet.so.2.9.2.1
b6f4d000-b6f5c000 ---p 0001f000 07:00 3665 /lib/libskarnet.so.2.9.2.1
b6f5c000-b6f5e000 r--p 0001e000 07:00 3665 /lib/libskarnet.so.2.9.2.1
b6f5e000-b6f5f000 rw-p 00020000 07:00 3665 /lib/libskarnet.so.2.9.2.1
b6f5f000-b6f6b000 rw-p 00000000 00:00 0
b6f6b000-b6f81000 r-xp 00000000 07:00 3605 /lib/ld-2.31.so
b6f87000-b6f89000 rw-p 00000000 00:00 0
b6f91000-b6f92000 r--p 00016000 07:00 3605 /lib/ld-2.31.so
b6f92000-b6f93000 rw-p 00017000 07:00 3605 /lib/ld-2.31.so
beaf8000-beb19000 rw-p 00000000 00:00 0 [stack]
Size: 132 kB
Rss: 4 kB
Pss: 4 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 4 kB
Referenced: 4 kB
Anonymous: 4 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd wr mr mw me gd ac
becd5000-becd6000 r-xp 00000000 00:00 0 [sigpage]
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]
Sorry, I am not able to post the whole output because of the mail size limit.
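(For a compact per-process total, something along these lines should work
as well; a sketch only, and /proc/<pid>/smaps_rollup needs a reasonably
recent kernel:

grep -E 'Rss|Pss|Private_Dirty' /proc/991/smaps_rollup
awk '/^Pss:/ { sum += $2 } END { print sum " kB" }' /proc/991/smaps

Either should roughly match what smem and ps_mem report for the process.)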
On my Linux system,
ps -axw -o pid,vsz,rss,time,comm | grep s6
1 1732 1128 00:00:06 s6-svscan
900 1736 452 00:00:00 s6-supervise
901 1736 480 00:00:00 s6-supervise
902 1736 444 00:00:00 s6-supervise
903 1736 444 00:00:00 s6-supervise
907 1744 496 00:00:00 s6-log
.....
And I don't think ps_mem is lying; I cross-checked it against smem as well.
Clear data from ps_mem:

 Private  +   Shared  =  RAM used   Program
 4.8 MiB  + 786.0 KiB =   5.5 MiB   s6-log (46)
12.2 MiB  +   2.1 MiB =  14.3 MiB   s6-supervise (129)
smem:
  PID User     Command                       Swap    USS    PSS    RSS
 1020 root     s6-supervise wpa_supplicant      0     96     98    996
 2001 root     s6-log -F wpa_supplicant.lo      0    104    106   1128
Almost the same amount of PSS/RSS is used by the other s6-supervise and
s6-log processes.
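(Commands along these lines should reproduce the comparison; this is only a
sketch, and the grep filter is illustrative:

sudo ps_mem | grep -E 's6-(log|supervise)'
sudo smem | grep -E 's6-(log|supervise)'
)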
I have tried building with the "--enable-allstatic" flag and unfortunately
I don't see any improvement. If you were referring to shared memory, then
yes, we are good there: it is only 2.1 MiB for 129 instances. But the
private memory is around 12.2 MiB, which works out to roughly 97 KiB per
process and matches the USS that smem reports. I am not sure whether this
is a normal value or not.
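(For clarity, by "--enable-allstatic" I mean a build along these lines; a
sketch of the skarnet-style configure, with an illustrative prefix:

./configure --prefix=/usr --enable-allstatic
make && make install

As I understand it, that links the skarnet libraries statically into the
binaries, so the libskarnet.so mappings should disappear from smaps, but it
did not change the private memory numbers for me.)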
If possible, could you please share reference smaps and ps_mem data for
s6-supervise? That would really help.
Dewayne, even if we pipe the logs to a file, we will still have an
s6-supervise process for the log service. Maybe I didn't understand it
correctly; sorry about that. Please help me with this.
Thanks,
Arjun
On Wed, Jun 9, 2021 at 8:18 AM Dewayne Geraghty <dewayne_at_heuristicsystems.com.au> wrote:
> Thanks Laurent, that's really interesting. By comparison, my FBSD
> system uses:
>
> # ps -axw -o pid,vsz,rss,time,comm | grep s6
> virt KB resident cpu total
> 38724 10904 1600 0:00.02 s6-log
> 41848 10788 1552 0:00.03 s6-log
> 42138 10848 1576 0:00.01 s6-log
> 42222 10888 1596 0:00.02 s6-log
> 45878 10784 1516 0:00.00 s6-svscan
> 54453 10792 1544 0:00.00 s6-supervise
> ... lots ...
> 67937 10792 1540 0:00.00 s6-supervise
> 76442 10724 1484 0:00.01 s6-ipcserverd
> 76455 11364 1600 0:00.01 s6-fdholderd
> 84229 10896 712 0:00.01 s6-log
>
> Processes pull in both ld-elf and libc.so, from procstat -v:
> start end path
> 0x1021000 0x122a000 /usr/local/bin/s6-supervise
> 0x801229000 0x80124f000 /libexec/ld-elf.so.1
> 0x801272000 0x80144c000 /lib/libc.so.7
>
> Yes - libc is ... large.
>
> Arjun, if you want to reduce the number of s6-log processes, perhaps
> consider piping them to a file which s6-log reads from. For example, we
> maintain various web servers; the access logs are unique and of interest
> to customers, but they don't (really) care about the errors, so we
> aggregate those with one s6-log. Works very well :)
>