Zsh Mailing List Archive
Messages sorted by: Reverse Date, Date, Thread, Author

Re: Emulating 'locate'

    Hi Lloyd :)

 * Lloyd Zusman <ljz@xxxxxxxxxx> dixit:
> >     - Update the database regularly. Very regularly, in fact. If files
> > are created and destroyed frequently, you will have to update the
> > database continuously... On the average system, anyway, this is not an
> > issue, especially if you look for files that reside on 'stable' parts
> > of the system.
> Well, I generally use the 'locate' command when I want to do a global
> search over my entire system.  I always am aware that it might be
> out-dated, and I go back to 'find' when I want to do a search that is
> up-to-the-moment accurate.

    That's a good deal, because both searches will be fast, and the
second one, the 'find' one, will be issued over a limited set of
directories. In fact I do the same, although instead of using 'find'
I do 'print -l **...', because I type it fast and the speed
difference when dealing with small hierarchies is negligible ;)
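Something like this toy comparison, on a throwaway tree (the paths are made up, and the glob line assumes zsh is installed):

```shell
# Build a tiny hierarchy and search it both ways; at this size the
# difference between 'find' and a recursive glob is negligible.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/deep"
touch "$tmp/src/main.c" "$tmp/src/deep/util.c" "$tmp/README"

# The 'find' way:
find "$tmp" -name '*.c' -print | sort

# The zsh glob way ('**' recurses; the (N) qualifier is per-pattern nullglob):
command -v zsh >/dev/null 2>&1 &&
    zsh -c "print -l $tmp/**/*.c(N)" | sort

rm -rf "$tmp"
```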

> However, in that case, I target it to a specific directory tree,
> and rarely, if ever, recurse down from the root directory unless I
> want to take a long coffee break waiting for results, and I don't
> mind users screaming at me for slowing down the system.

    ;)))))))))) I see you are not a BOFH ;))) Confess it: you like
your users XDDD

> Your locate function would be even better than it already is if you
> could point it at a directory instead of having it always start at root.
> That would be an interesting continuation of this exercise!

    But that's easy. Right now I can think of a solution: adding a
flag to make the search start from the root, for example, or
something like that. Or even easier, if the search term starts with a
dot, then strip that dot and do the search 'locally'. The flag is
cleaner, though.
> >> I'm not sure how it compares to this:
> >>   locate() { find / -name "*${^*}*" -print }
> >     This is faster, IMHO, because AFAIK find uses a non-recursive
> > algorithm to traverse the hierarchy. Although I'm not sure about that
> > glob pattern you use, since it will be interpreted by find, not the
> > shell :??
> zsh interprets the ${^*} part and intersperses it between the other two
> asterisks when the shell function is being invoked, and 'find'
> interprets the result.  I think I should have left out the ^, however,
> or probably only used ${1}.

    My fault: since the asterisk won't expand when quoted, as in the
example, my brain went on a trip to a fantastic land and I thought
that the asterisk inside the braces wouldn't be expanded either...
Anyway, it shouldn't work because, as you suggest, you should have
used just ${1}. The ^ is necessary to correctly expand multiple
patterns, one per positional parameter. No matter, really, because
you can use shell expansion to generate many '-name' options, one per
positional parameter.
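For example, a sketch of that last idea (the 'flocate' name is invented, and it takes a starting directory instead of hardcoding / so it is easy to try; assumes zsh):

```shell
# flocate dir pattern...
# Builds and runs: find dir ( -name *p1* -o -name *p2* ... ) -print
flocate() {
    emulate -L zsh
    local dir=$1; shift
    local -a clauses
    local p
    for p in "$@"; do
        clauses+=(-name "*${p}*" -o)
    done
    clauses=($clauses[1,-2])   # drop the trailing -o
    # zsh does not word-split or glob unquoted expansions, so the
    # patterns reach find intact, one clause per positional parameter.
    find $dir \( $clauses \) -print
}
```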

> I just ran a timing test, and unfortunately, 'find' fares better than
> your locate function, which I named 'xlocate' on my system.  Here are
> the results:
>   find / -name specific-file -print   # 15 min 19 sec elapsed
>   xlocate specific-file               # 28 min 40 sec elapsed

    Ooops... It nearly doubles the time... As I said, 'find' uses a
non-recursive approach for finding files, and that is a good point.
In fact, the standard way of finding files is 'find' because it is
faster ;))) I should look at the sources to make sure it is not
recursive, though.
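The timing comparison can be reproduced in miniature with 'time' (throwaway paths; the glob line assumes zsh):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b/c/d"
touch "$tmp/a/b/c/d/specific-file"

# Wall-clock both approaches; on a real / hierarchy this is where
# 'find' pulls ahead.
time find "$tmp" -name specific-file -print
if command -v zsh >/dev/null 2>&1; then
    time zsh -c "print -l $tmp/**/specific-file(N)"
fi

rm -rf "$tmp"
```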

    BTW, you seem to have a *really* big set of files...
> > For it to be
> > useful, it must be rewritten to use a database, or something like
> > that...
> Well, I think that there is a way to make it quite good for everyday use
> without having to go so far as to create a database: just come up with a
> way to target the search from a specific directory instead of always
> having to start from root.

    Well, that's a solution, too. Usually I run my searches from the
root directory; that's why I use locate. On my box, files are
created and/or deleted sparingly, so I just update the database once
a week.

> If your shell function could take an
> additional first argument, namely the directory under which to start
> searching, it would be great, IMHO.  For example:

    Thanks for the code :))

>   xlocate() {
>     setopt nullglob extendedglob
>     eval print -l ${argv[1]%/}'/**/'${^argv[2,-1]}'{,/**/*}'
>   }

    Nice! :)))
> I removed the asterisks before and after the ${^argv[2,-1]} so I don't
> lose the ability to do the following:
>   xlocate ~ '*.c'   # only matches *.c files under HOME
>   xlocate ~ c       # only matches files named 'c' under HOME

    That's something I did yesterday, because I usually write
patterns for my locate, and if I specify a filename I prefer to match
just that filename. Thanks again for your help and suggestions :)

    Raúl Núñez de Arenas Coronado

Linux Registered User 88736
http://www.pleyades.net & http://raul.pleyades.net/
