Zsh Mailing List Archive

Re: question about zargs

On Wed, 31 Oct 2012 21:40:07 +0800
Han Pingtian <hanpt@xxxxxxxxxxxxxxxxxx> wrote:
> I just learnt that there is a function 'zargs' which is just like
> 'xargs'. So why do we need 'zargs' when we already have 'xargs'?
> As an example, this works just fine with 'xargs':
>     % print -N **/* | xargs -n1 -0 ls

It works, but with more processes.  zargs allows you to have things
(though not ls) running completely in the shell.  In that case, you
aren't sensitive to the size of the argument list passed to an external
command.

> But this one will cause "(eval):2: fork failed: cannot allocate
> memory" error on my laptop:
>     % zargs **/* -- ls

ls is run as an external process, so the argument list is limited.
Unlike xargs, zargs doesn't have a built-in, system-dependent limit on
the number of arguments it'll pass in one go, as far as I can see, so
it needs to be told how to limit them.
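
(The kernel limit that an exec'd command's argument list runs into can
be inspected with getconf; this is standard POSIX, not part of the
original mail:)

```shell
# ARG_MAX is the maximum combined size, in bytes, of the argument list
# and environment passed to an exec'd program; POSIX guarantees at
# least 4096.
getconf ARG_MAX
```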

Hmm... in principle, you can do:

zargs -n2 **/* -- ls

so that it executes ls with one additional argument each time.  This is
equivalent to the -n1 you gave to xargs.

However, with a lot of files this is running incredibly slowly for me,
and after a few dozen files have been processed I hit:

279: mem.c:1180: MEM: allocation error at sbrk, size 589824.
zargs:279: fatal error: out of memory

(Luckily this is in a subshell.)

It looks like this is hitting some pathology in memory management to do
with the argument array (which is being shifted at that line).

