Zsh Mailing List Archive

Re: serverizing a fat process with named pipes



On Jun 15, 11:17pm, Alexy Khrabrov wrote:
}
} I have a heavy process, an English parser loading megabytes of models,
} and then reading stdin, sentence per line, outputting the parsed text
} to the stdout.  How do I properly serverize it -- running in the
} background with <p1 >p2, those created with mkfifo p1 p2?

As PWS mentioned in his reply, the nature of FIFOs is that the reader
blocks until a writer opens it, and then the reader gets EOF when the
writer closes it (and the reader has consumed all the data).  If the
parser is designed to exit upon EOF, then you can't "serverize" it
with FIFOs without repeatedly respawning it.
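To make the respawning variant concrete, here is a hypothetical sketch with "cat" standing in for the parser (the FIFO names match yours; the tempdir is just housekeeping).  Each request opens p1, writes one line, and closes it, so the "parser" sees EOF and exits, and the loop has to restart it -- reloading those megabytes of models -- for every single request:

```shell
# "cat" stands in for the parser; each client open/write/close of p1
# delivers EOF to it, so it exits and must be respawned.
dir=$(mktemp -d)
mkfifo "$dir/p1" "$dir/p2"
for i in 1 2 3; do
    cat <"$dir/p1" >"$dir/p2" &    # respawn the "parser" for this request
    echo "sentence $i" >"$dir/p1"  # open, write, close: parser gets EOF
    read -r reply <"$dir/p2"       # collect the reply
    echo "got: $reply"
    wait                           # reap the exited "parser"
done
rm -rf "$dir"
```

That works, but paying the model-load cost on every sentence is exactly what you want to avoid.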

If it does NOT exit on EOF, then something still needs to re-open the
FIFO each time, because once the original writer exits the reader will
always see EOF on the FIFO.

You may be able to work around this as follows (note the server has to
be started first, in the background, because opening a FIFO for writing
blocks until some reader has it open):

cat <p1 >p2 &	# Replace "cat" with your server process
exec {fd}>p1	# Open p1 for writing, but don't write anything

Now other writers can open p1 and send lines to it and even close it
again, but "cat" never sees EOF because the shell is holding open $fd.
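Put together, a runnable sketch of the whole setup might look like the following (hypothetical: "cat" again stands in for the parser; numbered fds are used for portability, but zsh's {fd}> form above behaves the same):

```shell
# "cat" stands in for the parser.  The shell holds both FIFO ends open
# so the server neither sees EOF on p1 nor SIGPIPE on p2.
dir=$(mktemp -d); cd "$dir"
mkfifo p1 p2
cat <p1 >p2 &    # the "server", started first so p1 has a reader
exec 3>p1        # hold p1 open for writing: the server never sees EOF
exec 4<p2        # hold p2 open for reading: replies can't hit SIGPIPE
echo "first sentence" >p1    # a client opens p1, writes, and closes it
read -r line <&4
echo "got: $line"
echo "second sentence" >p1   # ...and the same server is still running
read -r line <&4
echo "got: $line"
exec 3>&- 4<&-   # closing fd 3 finally delivers EOF; the server exits
wait
cd /; rm -rf "$dir"
```

The server process runs once, across any number of client writes.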

There are still various pitfalls, including output buffering in the
server and making sure that someone is always reading from p2 (a write
to a FIFO whose read end is closed raises SIGPIPE), so you might want
to look into using inetd or the equivalent to run the process as a
service on a socket with a well-known port number instead.
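For the inetd route, the relevant /etc/inetd.conf entry might look
something like this (the service name, user, and path are all
hypothetical, and the service would also need a port assigned in
/etc/services):

```
# "nowait" makes inetd fork one handler per connection, with the client
# socket as the handler's stdin/stdout -- note this starts a fresh
# parser per connection, so a single persistent daemon needs more work.
parser  stream  tcp  nowait  nobody  /usr/local/bin/parser  parser
```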


