Zsh Mailing List Archive

Re: more problems w/ irix



On Nov 13, 10:23am, Ray Jones wrote:
} Subject: more problems w/ irix
}
} to test it further and see how it responds when given more files than
} OPEN_MAX, i tried the "cat < * > /dev/null" in a directory w/ about
} 4000 files (OPEN_MAX is 2500).  now i'm getting a crash in halloc(),
} or (if i enable zsh_mem) malloc().

Different bug, I suspect.  Note that "cat < *" is the same as writing
"cat < file1 < file2 < file3 < file4 ..." and is in fact converted by
xpandredir() [glob.c] to exactly the same syntax tree that you'd get
if you typed out all the redirections explicitly.
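
To see how quickly that exhausts descriptors, here's a throwaway C
sketch (not zsh source, just the effect of the expansion): every match
of the pattern costs one open(), so ~4000 matches means ~4000 fds.

    /* Sketch only: expand a pattern with glob(3) and open each match,
     * which is what "cat < *" amounts to -- one open() per file. */
    #include <glob.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        glob_t g;
        size_t i;

        if (glob("*", 0, NULL, &g) != 0)
            return 1;
        for (i = 0; i < g.gl_pathc; i++) {
            int fd = open(g.gl_pathv[i], O_RDONLY);
            if (fd < 0)                     /* with ~4000 matches this    */
                perror(g.gl_pathv[i]);      /* starts failing at the limit */
        }
        globfree(&g);
        return 0;
    }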

Later on, in execcmd() [exec.c], when the redirections are processed,
there is no check that the number of redirections attempted stays
below the current value of OPEN_MAX (fdtable_size after PWS's patch).
There *is* a test for whether open() returns failure, but addfd()
then pretty much ignores failures of dup() via movefd(); and addfd()
itself returns void, so execcmd() has no chance to notice.
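
Something along these lines is what's missing (hypothetical sketch, not
zsh's actual addfd(); fdtable/fdtable_size stand in for the real
globals): check the table bound, and let the dup() failure propagate
back to the caller instead of returning void.

    /* Hypothetical sketch, not zsh's addfd(): record a duplicated fd,
     * refusing to write past the end of the table and reporting
     * dup() failure to the caller instead of ignoring it. */
    #include <unistd.h>

    static int *fdtable;        /* stand-in for zsh's fd bookkeeping */
    static int fdtable_size;    /* stand-in for the table's size     */

    int addfd_checked(int fd)
    {
        int newfd = dup(fd);        /* zsh goes through movefd() here */
        if (newfd < 0)
            return -1;              /* dup() failed: caller must know */
        if (newfd >= fdtable_size) {
            close(newfd);           /* would scribble past fdtable[]  */
            return -1;
        }
        fdtable[newfd] = 1;         /* safe: newfd is within bounds   */
        return newfd;
    }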

It could be that IRIX doesn't obey its own limits on the number of
open files -- if open() or dup() continues to succeed even after the
soft limit on the number of files is reached, zsh happily continues
adding fds beyond the end of fdtable[], scribbling on random heap.
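
One way to check that theory outside of zsh (throwaway test, assuming
/dev/null exists and RLIMIT_NOFILE is the soft limit in question): open
files in a loop and see whether open() really starts failing at the
limit.  If the printed count runs well past the soft limit, the kernel
isn't enforcing it.

    /* Throwaway test: does open() fail once the soft fd limit is hit? */
    #include <sys/resource.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct rlimit rl;
        long limit, count = 0;

        getrlimit(RLIMIT_NOFILE, &rl);
        limit = (long)rl.rlim_cur;
        /* stop a little past the limit so the loop always terminates */
        while (count < limit + 100 && open("/dev/null", O_RDONLY) >= 0)
            count++;
        printf("soft limit %ld, successful opens %ld\n", limit, count);
        return 0;
    }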

You should try compiling with --enable-zsh-debug and watch for Peter's
"fdtable too small" warnings.  I bet you'll get them.

If you do get them, then the fdtable *does* need to grow dynamically
as fds are added to it; it can't simply be preallocated at the time the
limits are examined.  But we don't know that for sure yet.
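
If it does come to that, the growth itself is simple enough.  A minimal
sketch, assuming fdtable is a plain malloc'd array (the names here are
mine, not the real ones):

    /* Sketch only: grow fdtable[] on demand rather than sizing it once
     * when the limits are examined. */
    #include <stdlib.h>
    #include <string.h>

    static int *fdtable;
    static int fdtable_size;

    int fdtable_ensure(int fd)
    {
        int newsize, *tmp;

        if (fd < fdtable_size)
            return 0;                        /* already big enough */
        newsize = fdtable_size ? fdtable_size : 64;
        while (newsize <= fd)
            newsize *= 2;
        tmp = realloc(fdtable, newsize * sizeof(*tmp));
        if (!tmp)
            return -1;                       /* growth failed */
        memset(tmp + fdtable_size, 0,
               (newsize - fdtable_size) * sizeof(*tmp));
        fdtable = tmp;
        fdtable_size = newsize;
        return 0;
    }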

-- 
Bart Schaefer                             Brass Lantern Enterprises
http://www.well.com/user/barts            http://www.nbn.com/people/lantern


