Zsh Mailing List Archive
Re: set -F kills read -t
On 03/18/2014 06:17 PM, Bart Schaefer wrote:
Pardon me while I provide further evidence in support of the theorem that
the best way to get the correct answer from the internet is to post the
wrong one.

Whose answer was wrong?
OK, then how else would one write a function that could use arguments
*or* piped input? grep does it, and doesn't need an arbitrary wait.
Again, in *practice* the "-t 1" seems perfectly good enough. I don't
want to belabour this; it's a theoretical/philosophical point.
(except of course that the size of the input is not predetermined). If
you cajole that C into a runnable program and try it, you'll find that
it behaves just the way "read -t" does. If you don't want that fcntl()
in there, don't pass the -t option.
    if read -t input
    then print "I definitely read input: $input"
    elif (( $#input ))
    then print "Error, input unchanged: $input"
    else print "End of file: $input"
    fi
OK, that at least covers the bases if the read failed.
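For the record, those branches can be exercised with a small runnable
sketch (bash or zsh; the explicit 1-second timeout and the variable
names are mine):

```shell
# Case 1: a line is already coming down the pipe, so "read -t" succeeds.
got=$(printf 'hello\n' | { read -t 1 line && echo "GOT: $line"; })
echo "$got"

# Case 2: stdin is at end-of-file, so "read -t" fails (nonzero status)
# without blocking.
if read -t 1 line < /dev/null; then
  status=succeeded
else
  status=failed
fi
echo "read $status"
```

The success branch fires only when a whole line is actually available;
otherwise the nonzero return status sends you into the error handling.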
} Identical runs of identical code should produce identical results, no?

No. Consider for example:

    for (( count=1; count <= 10000; count++ )) do
        print $count
        if read -t something
        then print "GOT: $something"; break
        fi
    done

This happily counts to 10000, stopping as soon as it gets input. Exactly
how many numbers are printed before it stops depends on when the input
arrives. Identical runs of identical code, but not identical results.
If the most important thing is watching it count, then this is exactly
what "read -t" is for. If the important thing is "GOT: $something",
then don't use -t.

Point made. That's well and good, I can see that "-t" would be exactly
right in that situation, but in my pipe situation it's not exactly right.
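That counting loop can be turned into a self-contained demonstration
(bash syntax; the background writer, the 0.05-second poll interval, and
the 0.3-second delay are my additions): the writer sends a line after a
delay, and how many polls happen first varies from run to run.

```shell
out=$(
  { sleep 0.3; echo "hello"; } | {
    count=0
    while [ "$count" -lt 10000 ]; do
      count=$((count + 1))
      # Poll: wait up to 0.05s for a complete line, then loop again.
      if read -t 0.05 something; then
        echo "GOT: $something after $count polls"
        break
      fi
    done
  }
)
echo "$out"
```

Run it twice and the poll count will usually differ: identical code,
non-identical results, exactly because -t makes the timing observable.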
} Or at least warn you if that isn't going to happen. I appeal to the
} doctrine of least surprise.

I appeal to the documentation:

    -t [ NUM ]
        Test if input is available BEFORE ATTEMPTING TO READ. ...
        If no input is available, return status 1 AND DO NOT SET ...

Yeah ... I guess in my still synchronous and 'blocking' mind, when
there's a pipe there is *always* input, so always 'available'. It will
take some getting used to the idea that the various steps in a long
sequence of piped functions can quit any time they like, in any order
that happens to happen.
Consider what would happen if "echo" produced a gigabyte of output, or
a terabyte, or a petabyte. Where is all of that supposed to go while
waiting for echo to return? Do you always expect your web browser to
download an entire video before beginning to play it?

A valid point. I have no issue that "-t [seconds to wait]" is available
when needed, but in the case of:

    echo "a string" | func

I hardly think that func should not politely wait for "a string", and as
my tests showed, sometimes it didn't. I dunno, maybe there is no way
that 'read' can be made to wait until something to the left of it in a
pipe situation has returned, so what I'm wanting might be impossible.
If it were possible it would be the '-p' switch: 'pipe mode' ... wait
for piped input to finish.
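For what it's worth, a plain read with no -t already does the polite
waiting asked for here: it blocks until a whole line arrives or the
writer closes the pipe. A small sketch (the 0.2-second delay is mine):

```shell
# The reader starts immediately; the writer dawdles. Plain read just
# blocks until the line shows up, however long that takes.
result=$( { sleep 0.2; echo "a string"; } | { read line && echo "waited and got: $line"; } )
echo "$result"
```

So "wait for piped input" is the default; -t is the opt-out, not the
other way around.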
} It's just a wrapper function. In this test case, around 'grep'.

OK, but the wrapper must be doing *something*. I mean, this is a
wrapper around grep:

    print -u2 "Hi, I'm going to grep now."
    grep "$@"

This will perfectly happily read from a pipe, a file, or a file named
in the arguments, without needing "read" at all. So what exactly is
your wrapper doing, such that it needs to consume standard input before
grep ever sees it?

Yabut, in the pipe situation you don't supply a filespec ....
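And that's fine: a sketch of such a wrapper (the name "mygrep" is mine,
not from the thread) that forwards its arguments to grep and never
touches stdin itself, so grep reads the pipe or the named file on its
own:

```shell
mygrep() {
  echo "Hi, I'm going to grep now." >&2
  grep "$@"
}

# From a pipe: no filename argument, so grep falls back to stdin.
piped=$(printf 'one\ntwo\n' | mygrep two)
echo "$piped"

# From a file named in the arguments: grep opens it, stdin is ignored.
tmp=$(mktemp)
printf 'alpha\nbeta\n' > "$tmp"
fromfile=$(mygrep alpha "$tmp")
rm -f "$tmp"
echo "$fromfile"
```

The key design point is that the wrapper never reads stdin itself; it
leaves the decision between "filespec" and "pipe" entirely to grep.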
Shoot. All this 'read' stuff has been a colossal mistake. Damn,
everything I read on the internet said 'read' was the way to go. HAAA,
which gets back to your theorem! I see now how I've been barking up the
wrong tree. It never even occurred to me that "$@" would soak up piped
input; I thought "$@" stuff had to be arguments after the command <:-(

Sorry Bart, I'm a long study.