Zsh Mailing List Archive

Re: set -F kills read -t



Pardon me while I provide further evidence in support of the theorem that
the best way to get the correct answer from the internet is to post the
wrong answer.

On Mar 18,  3:08pm, Ray Andrews wrote:
}
} Thanks, now I at least know what was busted. I must tread lightly on
} this point because zsh has its own culture, but from the perspective
} of my C brain, " read -t " ... maybe this, maybe that ... with exactly
} the same input is hard to accept. It's not very robust.

"read -t input" in zsh is pretty nearly equivalent to this C code:

   char input[1024];
   fcntl(0, F_SETFL, O_NONBLOCK);   /* put stdin into non-blocking mode */
   read(0, input, 1024);            /* fails with EAGAIN if nothing has arrived yet */

(except of course that the size of the input is not predetermined).  If
you cajole that C into a runnable program and try it, you'll find that
it behaves just the way "read -t" does.  If you don't want that fcntl()
in there, don't pass the -t option.
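
If you'd rather see the same thing from the shell than from C, here is a
quick sketch; the sleeps are only there to force each ordering, and the
exact timing is of course up to the scheduler:

   # The writer hasn't produced anything yet, so "read -t" gives up at once:
   { sleep 1; print data } | { read -t line; print "status $?, line '$line'" }
   # --> status 1, line ''

   # The data is already sitting in the pipe, so the same "read -t" succeeds:
   print data | { sleep 1; read -t line; print "status $?, line '$line'" }
   # --> status 0, line 'data'

The reading side is identical both times; the only difference is whether
anything has arrived by the moment read runs.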
 
} > (One could argue that "read" should always erase the parameter to which
} > it was told to write, no matter whether the action of reading succeeds;
} > but that's a different conversation.)

} I'd say it's almost the nub of this conversation. If, as Peter says,
} zsh is asynchronous, and that means that process one might or might
} not be finished before process two, then it seems to me that if there
} is a failure of some sort, then that should be manifested.

It *IS* manifested ... as the return value of "read" ($?).  Which your
function ignored ...  The full situation goes something like this:

  input=START
  if read -t input                   # read succeeded and set $input
  then print "I definitely read input: $input"
  elif (( $#input ))                 # read failed, $input untouched: nothing was ready
  then print "Error, input unchanged: $input"
  else print "End of file: $input"   # read failed, $input emptied: the writer is done
  fi

In most cases you don't care:

  if read -t input
  then print "got \$input: $input"
  else print "got nothing, do not use \$input"
  fi

} Identical runs of identical code should produce identical results, no?

No.  Consider for example:

   for (( count=1; count <= 10000; count++ )) do
     if read -t something
     then print "GOT: $something"; break
     fi
     print $count
   done

This happily counts to 10000, stopping as soon as it gets input.  Exactly
how many numbers are printed before it stops depends on when the input
arrives.  Identical runs of identical code, but not identical results.
If the most important thing is watching it count, then this is exactly
what "read -t" is for.  If the important thing is "GOT: $something",
then don't use -t.
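
For contrast, the same loop with a plain "read" is a sketch of the other
choice:

   for (( count=1; count <= 10000; count++ )) do
     if read something        # blocks right here until a line (or EOF) arrives
     then print "GOT: $something"; break
     fi
     print $count
   done

Now the first iteration just sits and waits, so nothing gets counted in
the meantime; and if the writer closes the pipe without sending anything,
read fails instantly on every iteration and the loop counts straight to
10000.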

} Or at least warn you if that isn't going to happen. I appeal to the
} doctrine of least surprise.

I appeal to the documentation:

    -t [ NUM ]
          Test if input is available BEFORE ATTEMPTING TO READ.  ...
          If no input is available, return status 1 AND DO NOT SET
          ANY VARIABLES.

(emphasis mine, obviously).  What is it that made you believe you need
the -t option in the first place?

} Ok, but in the context of a pipe can't we have 'wait for the input
} that IS coming. Don't wait one second or ten seconds or no seconds,
} wait until the input arrives.

Well, yes.  That's what "read" *without* the -t option does.  That's why
I said:

} > I suspect that what you really want is the answer to the question "is
} > my standard input a pipe?" and to go do something else if it is not.

What is it about piped input that requires different behavior from your
function?
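
(To be concrete about "wait until the input arrives": a plain read really
does wait.  This sketch sits for two seconds and then gets its line:

   { sleep 2; print "late data" } | { read line; print "finally: $line" }

No -t, no race; the right-hand side simply blocks until a line shows up
or the pipe is closed.)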

} echo "a string" | func
} 
} should send "a string" to func absolutely every time.

It does send it.  Whether "func" consumes that input is up to the code
inside of "func".
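
To make that concrete, here is a hypothetical pair of wrappers; both are
handed exactly the same pipe, but only the one whose body actually reads
its standard input ends up consuming the string:

  consumes() { read line; print "consumed: $line" }
  ignores()  { print "never looked at stdin" }

  echo "a string" | consumes    # consumed: a string
  echo "a string" | ignores     # never looked at stdin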

} The very existence of the pipe symbol should say 'wait for it'. Wait
} for 'echo' to return.

But that's not what the pipe symbol means.  It means only "connect the
standard output of (stuff on the left) to the standard input of (stuff
on the right)."

Consider what would happen if "echo" produced a gigabyte of output, or
a terabyte, or a petabyte.  Where is all of that supposed to go while
waiting for echo to return?  Do you always expect your web browser to
download an entire video before beginning to play it?

} It's just a wrapper function. In this test case, around 'grep'.

OK, but the wrapper must be doing *something*.  I mean, this is a
wrapper around grep:

  mygrep() {
    print -u2 "Hi, I'm going to grep now."
    grep "$@"
  }

This will perfectly happily read from a pipe, a redirected file, or a file named
in the arguments, without needing "read" at all.  So what exactly is
your wrapper doing, such that it needs to consume standard input before
grep does?
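
If the answer is "it should only go after standard input when there is a
pipe or a redirection there at all", then the earlier suggestion applies:
ask where stdin is connected instead of probing it with "read -t".  A
sketch (what to do in the terminal branch is just a placeholder):

  mygrep() {
    if [[ -t 0 ]]; then
      # stdin is the terminal: no pipe or redirection to read from
      print -u2 "no piped input; doing the other thing"
    else
      # stdin is a pipe or a redirected file; let grep consume it
      grep "$@"
    fi
  }

Unlike "read -t", the [[ -t 0 ]] test doesn't depend on whether any data
has arrived yet, only on what stdin is connected to.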

} But this is a matter of principle. Asynchronous piping seems almost a
} contradiction.

Every shell since the 1970s has worked this way, so you may want to
reconsider which principle is involved. :-)

} Surely each stage of a chain of pipes has a right to expect linear
} travel of data.

If you think of a pipeline as establishing "direction", then yes, the
chain has the right to expect that the data will always flow in the
same direction and (if there is only a single producer) in a fixed
order; but it does not have the right to expect that it will always
start flowing at a particular time and flow at a fixed rate.


