
An idea for fast "last-N-lines" read



Hello,
I read somewhere that a good way to read the "last N lines" of a file
is to memory-map it. I tried to check this with Zsh, but the obvious
mapfile approach turned out to be unusably slow:

    zmodload zsh/mapfile
    typeset -F SECONDS=0
    integer size=${#mapfile[input.db]}
    for (( i=size; i>=1; --i )); do
        if [[ ${${mapfile[input.db]}[i]} = $'\n' ]]; then
            echo Got newline / $SECONDS
    ...

This gives:

Got newline / 0.1383100000
Got newline / 16.0876810000
Got newline / 26.8089250000

for a 2 MB file, apparently because each newline check memory-maps the
file (and copies its whole contents) again.
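
If that is the cause, hoisting the fetch out of the loop should help.
Here is a minimal reworking of the loop above, assuming it is the
repeated ${mapfile[input.db]} expansion that is expensive (the content
variable is mine):

    zmodload zsh/mapfile
    typeset -F SECONDS=0
    content=${mapfile[input.db]}    # fetch the mapped file once
    for (( i=${#content}; i>=1; --i )); do
        # each check now indexes a plain scalar instead of re-fetching
        if [[ ${content[i]} = $'\n' ]]; then
            echo Got newline / $SECONDS
        fi
    done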

So the idea is to add such a feature. It would allow running Zsh on
machines where, for example, a periodic random check of the last 1000
lines of gigabyte-sized logs is needed. It's possible that even Perl
doesn't have this. I'm thinking about a $(<10<filepath) syntax that
would return a buffer with the last 10 lines, to be split with (@f). Or
maybe $(<10L<filepath) for lines, and $(<10<filepath) for bytes. Maybe
it's easy to add? Otherwise, an extension to zsh/mapfile could be
added, or a new module written.
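
In the meantime, the underlying technique (seek near the end of the
file, read fixed-size blocks backwards until enough newlines are seen)
can be sketched with the existing zsh/system and zsh/stat modules. The
function name and the block size are made up, and this is a rough
sketch, not a tested implementation:

    zmodload zsh/system
    zmodload -F zsh/stat b:zstat

    # tail-lines N file -- print the last N lines of file
    # (hypothetical helper, not a real builtin)
    tail-lines() {
        integer want=$1 blk=4096 fd pos step size
        local file=$2 buf chunk
        local -a sz
        zstat -A sz +size -- $file || return 1
        size=$sz[1]
        exec {fd}<$file || return 1
        pos=$size
        # Walk backwards from EOF in blk-sized steps, prepending each
        # block, until buf holds more newlines than lines requested.
        while (( pos > 0 )); do
            (( step = pos < blk ? pos : blk, pos -= step ))
            sysseek -u $fd $pos
            sysread -i $fd -s $step chunk    # short reads ignored here
            buf=$chunk$buf
            (( ${#${buf//[^$'\n']}} > want )) && break
        done
        exec {fd}<&-
        local -a lines=( "${(@f)buf}" )
        print -rl -- "${(@)lines[-want,-1]}"
    }

    tail-lines 1000 input.db

That keeps the amount of data read proportional to the requested tail
rather than to the file size, which is the whole point of the proposed
syntax.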

BTW, (@f) skips trailing empty lines, i.e. a final \n\n... That's quite
problematic, and there's probably no workaround?
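
To illustrate the behaviour being described (the count is what I would
expect from the report above):

    buf=$'a\n\nb\n\n'
    lines=( "${(@f)buf}" )
    print ${#lines}    # 3: a, empty line, b; the trailing \n\n is gone
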
-- 
  Sebastian Gniazdowski
  psprint3@xxxxxxxxxxxx


