Zsh Mailing List Archive

Re: [Bug] ZSH segmentation fault



matthieu kermagoret wrote:
> Hello!
> 
> I just found a bug in zsh involving the << (here-document) redirection.
> First I generate a file like this:
> > echo "cat << EOF" > segfault_file
> > for i in `seq 10000`; do echo `printf "%010000d" 0` >> segfault_file; done
> 
> Then I make zsh execute it like this:
> > zsh < segfault_file
> 
> After a little while zsh segfaults!

Hmm... you've made the shell use massive amounts of memory and it's
crashed when it didn't have enough.  (On my laptop with 2 gigabytes of
main memory and the same amount of swap this doesn't crash, at least
with the latest version of the shell.)
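
For scale, some back-of-the-envelope arithmetic: each of the 10000
generated lines is 10000 zeros plus a newline, so before counting any
per-string overhead the here document alone comes to about 100
megabytes:

% print $(( 10000 * (10000 + 1) ))
100010000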

The problem is that the shell actually keeps the here document in
memory.  This seems a bit lazy because it needs to write it to a
temporary file later anyway.  However, there are cases where this is
very difficult to handle any other way.  Consider a shell function:

fn() {
   cat <<EOF
     ... very long text ...
EOF
}

Obviously the here-document text has to be in the shell's memory, since
keeping the code around for later is the whole point of a shell
function, so in this case the problem isn't fixable.  A partial kludge
for here documents in scripts or on the command line would be very
messy and wouldn't fix the fundamental problem: you're using a syntax
which potentially asks for more memory than is available.
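
If the text doesn't have to live inside the function itself, you can
sidestep the problem entirely: keep the bulk of the text in an ordinary
file and redirect from that, so the shell streams it from disk instead
of holding it in memory.  (A sketch only; the file name is made up.)

fn() {
   cat < /path/to/long_text
}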

The only general fix, or at least graceful way out, for crashes like
this is for every memory allocation in zsh to be error checked, with
the shell aborting cleanly if one fails.  There are very, very many of
these, and it still wouldn't help you run programmes requiring large
amounts of memory.
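
In the meantime you can at least bound the cost of reproducing the
crash.  Capping the shell's address space (the 200 megabyte figure
below is only illustrative) makes the failing allocation happen quickly
instead of dragging the machine through swap first, though without the
error checking described above the shell still dies ungracefully:

% limit addressspace 200M     # or equivalently: ulimit -v 204800
% zsh < segfault_file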

-- 
Peter Stephenson <p.w.stephenson@xxxxxxxxxxxx>
Web page now at http://homepage.ntlworld.com/p.w.stephenson/


