Zsh Mailing List Archive

Re: rolling over high-traffic logfiles?



On Jul 15, 11:49pm, Sweth Chandramouli wrote:
} Subject: rolling over high-traffic logfiles?
}
} 	is there some easy way to roll over high-traffic logfiles
} in zsh, without losing any possible incoming data?

There isn't any magic for this that is specific to zsh.

} can zsh do some sort of file locking here?

It wouldn't help.  The processes that are writing to the log file would
also have to use the equivalent locking.  (There is such a thing in unix
as "mandatory" file locking, but using that would likely just cause the
logging processes to get errors.  All other locking is "advisory," which
means all processes involved must have been written so as to use the same
cooperative calls.)
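
For illustration, here is a rough sketch of what cooperative advisory
locking would look like, assuming the util-linux flock(1) utility is
available (it is not part of zsh).  The point is that the rotation's
lock buys nothing unless every writer takes the same lock:

```shell
# Sketch of cooperative advisory locking, assuming flock(1) from
# util-linux.  The lock file itself is arbitrary; what matters is
# that every process agrees on it.
lock=./extlog.lock

# a cooperating logger would wrap each write like this:
flock "$lock" -c 'echo "a log entry" >> extlog'

# ... and the rotation would take the same lock before swapping:
flock "$lock" -c 'mv extlog extlog.old && : > extlog'
```

A logger that simply does `>> extlog` without taking the lock is
unaffected by it, which is why locking in the rotation script alone
cannot prevent lost entries.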

} does anyone have any other ideas on how to not lose any log entries?

Use "ln" and "mv" to replace the file, rather than "cp" over it.

	cp /dev/null extlog.new &&
	ln extlog extlog.`date +%Y%m%d` &&
	mv -f extlog.new extlog ||
	rm -f extlog.new

There's still a race condition where some process could attempt to write
to extlog during the execution of "mv", that is, between unlinking the
old extlog and renaming extlog.new to extlog.  However, the window for
failure is much smaller, and could be made smaller still by using the
"files" module with zsh 3.1.4 so that "ln" and "mv" are shell builtins.
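
To see why the sequence loses nothing already written: the hard link
gives the old inode a second name, so after the rename the dated file
still holds all accumulated data, and any process with the log already
open keeps writing to that same inode.  A sketch of the whole sequence
in a scratch directory (using `date +%Y%m%d` for a space-free suffix;
plain `date` output contains spaces, which zsh tolerates in a filename
but sh would word-split):

```shell
# Sketch of the ln/mv rotation in a scratch directory.
cd "$(mktemp -d)"
printf 'old entry\n' > extlog        # stand-in for the live log

cp /dev/null extlog.new &&           # empty replacement on the same filesystem
ln extlog extlog.`date +%Y%m%d` &&   # second name for the old inode
mv -f extlog.new extlog ||           # rename() swaps the empty file into place
rm -f extlog.new                     # clean up only if a step failed

grep 'old entry' extlog.`date +%Y%m%d`   # old data survives under the dated name
wc -c < extlog                           # the live log starts over empty
```

With the "files" module loaded (zmodload, zsh 3.1.4), ln, mv, and rm in
this sequence run as builtins rather than forked commands, which is what
shrinks the race window mentioned above.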

-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com


