On 2002-08-24 at 03:47 +0000, Bart Schaefer wrote:
> On Aug 23, 8:22pm, Phil Pennock wrote:
Most of my mail was curiosity. I'm not too sure of most of this, which
is why I didn't CC the list but made it a private reply instead. Sorry
for any offense caused -- I see now that I didn't word it as carefully
as I should have.
> there might be a problem with the "ls ${c}" in my solution, but the spec
> said "do an ls" not "do a print -l", and your solution passes exactly the
> same number of arguments to "print -l --" as mine passes to "ls").
True, but it was "ls -1", so I added another optimisation to avoid the
extra stat()s.
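For illustration, the difference can be sketched like this -- the file
names are stand-ins, and printf is the portable spelling of what zsh's
print -l builtin does:

```shell
# With the glob result already in hand, listing the names again via
# "ls -1" would stat() each one; printing them from the shell does not.
# In zsh this is "print -l -- $c"; printf is the portable equivalent.
set -- alpha.c beta.c gamma.c    # stand-ins for a glob's expansion
printf '%s\n' "$@"               # one name per line, no extra stat()s
```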
> In other words, you might have reason to be cautious about anything that
> *expands* a list, but just building one (i.e., array assignment) should
> not be an issue unless you're hitting stacksize or memoryuse limits.
Noted. I still try to avoid risking hitting stacksize limits, though.
Influence of years ago doing Comp Sci work on an Amiga, when others used
Unix. I tend to try to explicitly free memory that I allocate. I try
to avoid techniques which chew a lot of stack or other RAM. I might do
something quick and dirty, but will try to fix it before it goes into a
script.
> The size of the environment also has an effect -- you might try exporting
> as little as possible, if you frequently hit argv limits.
Ah, it's usually not envp subtracting from argv; it's more a case of my
mind tracking a few things, finding no tags file for /usr/src/sys/ on a
BSD system, and trying a grep for something across all the kernel
source with **/*.[chyl] -- the failure leaves me blinking, grumbling,
writing a find/xargs version, and then trying to remember what I was
looking at before.
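The fallback looks roughly like this; the directory, file contents,
and the symbol grepped for are all illustrative stand-ins (a temporary
directory keeps the sketch self-contained):

```shell
# When a glob like **/*.[chyl] expands past ARG_MAX and execve()
# fails with E2BIG, find/xargs sidesteps the limit: find streams the
# file list and xargs re-batches it into argv vectors that fit.
tmpdir=$(mktemp -d)                       # stand-in for /usr/src/sys
printf 'int some_symbol;\n' > "$tmpdir/a.c"
printf '/* no match here */\n' > "$tmpdir/b.h"

matches=$(find "$tmpdir" -type f \( -name '*.c' -o -name '*.h' \
      -o -name '*.y' -o -name '*.l' \) -print0 \
    | xargs -0 grep -l 'some_symbol')
echo "$matches"                           # only a.c matches

rm -rf "$tmpdir"
```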
Also, dealing with my mailboxes being forged as the envelope sender in
spam: 60,000 mails, two files each. I wrote tools in Perl to handle it
more efficiently, since the regular easy shell stuff kept barfing.
> I timed your solution and mine using repeated runs on about 400 files
> (after changing mine to also use the "print" builtin) and they're almost
> exactly the same. Yours uses a little more system time, mine a little
> more user time (file tests vs. string manipulation, I suppose).
*nods* Thanks. Sorry, I was really asking whether you knew off the top
of your head; I should have said so, to save you running tests which I
was too lazy to try myself.
> } Which leads to a question: how much hassle is it to have a glob modifier
> } be able to duplicate the Simple Command which is calling it?
>
> A glob modifier, just about impossible. A precommand modifier or option,
> perhaps. The problem is, by the time the E2BIG error comes back from
> execve(2), it's too late to do much except croak -- so zsh would need a
> heuristic to predict whether/how to split up the arguments, so it could
> be done sooner.
Are there sufficient hooks to allow this to be done as a module? I have
enough interest in this to actually go back to looking at zsh internals
and writing a module. sysconf(_SC_ARG_MAX) should return the space
available. There's more overhead in tracking the sizes of all the
strings explicitly and summing them before a command, but that can be
restricted to the case where a glob is used, so it wouldn't normally
slow things down.
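A shell sketch of the bookkeeping involved, just to make the idea
concrete -- fits_in_argv is an illustrative name of mine, not an
existing zsh facility, and a real implementation would live in C
inside the shell:

```shell
# Predict whether a word list fits the argv/envp budget before exec,
# instead of waiting for execve() to fail with E2BIG.  getconf ARG_MAX
# reports the same limit as sysconf(_SC_ARG_MAX), and the environment
# is charged against the same space, so it is counted too.
fits_in_argv() {
  limit=$(getconf ARG_MAX)
  envbytes=$(env | wc -c)
  total=0
  for arg in "$@"; do
    total=$(( total + ${#arg} + 1 ))    # each word plus its NUL
  done
  [ $(( total + envbytes )) -lt "$limit" ]
}

if fits_in_argv grep -l some_symbol '*.c'; then
  echo "single exec is fine"
else
  echo "split the argument list"
fi
```

A real heuristic would also need slack for the pointer arrays and any
per-platform fudge, which is exactly the awkward part Bart mentions.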
But, uhm, not for a month or so.
--
"Markets can remain irrational longer than you can remain solvent"
-- John Maynard Keynes