Re: new s6-log

From: Olivier Brunel <jjk_at_jjacky.com>
Date: Sun, 08 Feb 2015 22:12:32 +0100

On 02/07/15 17:17, Laurent Bercot wrote:
> On 27/01/2015 01:10, Laurent Bercot wrote:
>> Something like that. I haven't given it any thought yet
>
> So I've experimented a lot with that. I've put in knobs
> and turned and tweaked them, I've put guards on the number of
> forks per second, and guards on the last time since a fork
> happened, and I've made it possible to fork several handlers
> at a time or only one handler at a time, etc. etc.
>
> Nothing is satisfying. The overwhelming conclusion to my
> experiments was: it's doable, but safety is in direct opposition
> to reliability here - the functionality has conflicting goals.
> Either the user risks forking a zerg swarm, or he risks missing
> log lines. It's possible to still get notification of a missed
> log line even without the content, but it's not possible to predict
> what content will be missing.
> Unix was clearly not made for this. You definitely can't safely
> and reliably make a control flow depend on an arbitrary data flow.
>
> So, I did what I usually do in those cases: push the problem onto
> the users. :P
> I went back to the "log to a file" approach, but of course I didn't
> make s6-log write to files. I just added a "log to stdout" directive.
>
> So if you want to be notified when a certain line arrives, you just
> start s6-log by pipelining it into a long-lived program that notifies
> you when it gets a line; the s6-log instance should have a script
> that, among other things, selects the interesting line and sends it to
> stdout.
> (If s6-log is supervised, when you pipeline, make sure s6-log gets
> the run script's pid.)

Just did some quick tests, and it seems I'll be able to do what I need
just fine with this, yes...
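
For anyone wanting to try the same thing, here's roughly the kind of
setup I mean; just a sketch, assuming execline's pipeline, with
my-notifier and the paths being made-up names:

  #!/command/execlineb -P
  # Logger run script (sketch). pipeline -w spawns my-notifier as the
  # reader, then execs into s6-log in this same process, so the
  # supervised pid is s6-log's and s6-log's stdout feeds my-notifier.
  pipeline -w { my-notifier }
  s6-log -b 1 /var/log/myservice

Here every line is both copied to stdout (the new 1 directive) and
logged to the directory; my-notifier is just a long-lived program
reading lines from its stdin and reacting to the interesting ones. The
selection could also be done in the s6-log script itself, so that only
the relevant lines ever reach stdout.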

> That's one long-lived program that reads lines and does stuff, instead
> of a flurry of short-lived programs. I feel a lot more comfortable with
> that. The program can even die when it's not interested anymore, s6-log
> will just stop storing stuff to send to stdout after a failed write.

Except for that bit. I don't like it, and I'd really like an option to
turn that behavior off. Specifically, in my case the only scenario I
can imagine where the write would fail, i.e. the daemon being down, is
that it crashed or was restarted; neither of which should mean the
whole thing stops.
I can understand that, because it crashed or was restarting, some
messages coming in at that exact moment may be lost or not processed,
but that certainly shouldn't mean no other messages ever get sent
again! (At least, not in my case; as a default, why not, as long as I
can turn it off.)

> I've used the opportunity to overhaul s6-log and make some changes.
> Notably, there can now be ISO 8601 timestamps - for the log contents,
> not for the archive filenames. Less notably, but artistically nice:
> the script now lives in the stack, not in the heap.

There seems to be a little bug in the refactoring, though: TAI64N
timestamps are no longer valid; see patch.
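
(For reference, and hedging since I'm only going from the docs here,
with a made-up logdir: the timestamp format is chosen by a script
directive, e.g.

  s6-log t /var/log/myservice    # TAI64N timestamps
  s6-log T /var/log/myservice    # the new ISO 8601 timestamps
)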

> Available on the current s6 git head. (Needs the current skalibs git
> head.) Please tell me if it works for you.
>
Received on Sun Feb 08 2015 - 21:12:32 UTC