Zsh Mailing List Archive

Re: OPEN_MAX from sysconf



"Bart Schaefer" <schaefer@xxxxxxxxxxxxxxxxxxxxxxx> writes:

> I don't think it's worth the effort.

Maybe not, but the current situation is not optimal either.

> In practice there's always going to be *some* hard limit, even if
> it's in the thousands; a few thousand chars allocated isn't going to
> make that much difference to zsh.

A somewhat contrived example (this is with the patch applied):

-------------------------------------
bash# uname -mrs
NetBSD 1.2 i386
bash# sysctl kern.maxfiles
kern.maxfiles = 1772
bash# sysctl -w kern.maxfiles=1000000000
kern.maxfiles: 1772 -> 1000000000
bash# ulimit -Hn unlimited
bash# ulimit -n unlimited
bash# zsh
zsh: fatal error: out of memory
bash# ulimit -n 1000
bash# zsh
zsh# limit -h
cputime         unlimited
filesize        unlimited
datasize        256MB
stacksize       8MB
coredumpsize    unlimited
memoryuse       16MB
memorylocked    16MB
maxproc         unlimited
descriptors     1000000000
zsh# limit descriptors 1000000000
zsh# uname
zsh: segmentation fault  uname
-------------------------------------
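
For what it's worth, the failure mode is what you'd expect when
per-descriptor bookkeeping is sized from the descriptor limit. Here is a
minimal sketch (not zsh's actual code, just the pattern under
discussion) of why a limit of 10^9 turns "a few thousand chars
allocated" into roughly a gigabyte:

-------------------------------------
/*
 * Sketch: size a per-descriptor table from sysconf(_SC_OPEN_MAX).
 * With the rlimit pushed to 1000000000, the calloc() below asks for
 * ~1 GB and fails; if the size were used without checking, it would
 * be worse.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long openmax = sysconf(_SC_OPEN_MAX);   /* tracks the soft limit */
    if (openmax < 0)
        openmax = 1024;                     /* fallback if indeterminate */

    printf("_SC_OPEN_MAX = %ld\n", openmax);

    /* one byte of bookkeeping per possible descriptor */
    char *fdtable = calloc((size_t)openmax, 1);
    if (!fdtable) {
        fprintf(stderr, "fatal error: out of memory\n");
        return 1;
    }
    free(fdtable);
    return 0;
}
-------------------------------------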

/Johan


