Zsh Mailing List Archive
Messages sorted by: Reverse Date, Date, Thread, Author

Re: Improving zcompdump (Re: A patch with hashtable optimization, which doesn't work)

On 20 May 2017 at 19:08:09, Bart Schaefer (schaefer@xxxxxxxxxxxxxxxx) wrote:
> } Tried to optimize mkautofn, to speed up sourcing zcompdump.
> How much does zcompdump actually help? Have you compared startup with
> and without it?
> There's a bunch of stuff in .zcompdump. Have you investigated whether
> certain parts of it are slower than others?
> One lesson learned with Completion/Base/Utility/_store_cache is that
> parsing array assignments is expensive.

I've wrapped sourcing zcompdump in compinit this way:

      zmodload zsh/zprof
      () {
          builtin . "$_comp_dumpfile"
      }
      zprof | head -n 14

Then I tried it a) with the normal .zcompdump, and b) with a modified one in which the assignment is _comps=( ), i.e. empty. The results seem to confirm what you said:
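The b) variant can be produced e.g. like this. A sketch only, under the assumption that the assignment starts at a line reading "_comps=(" and ends at a line ")" on its own, as in a typical .zcompdump; the sample file and the zcompdump.empty name are made up for illustration:

```shell
# Create a miniature stand-in for a real .zcompdump:
printf '%s\n' '_comps=(' "'cd' '_cd'" ')' 'autoload -Uz _cd' > zcompdump.sample

# Replace the whole _comps=( ... ) assignment with an empty one,
# keeping everything else, for the b) timing run:
sed '/^_comps=(/,/^)/c\
_comps=( )' zcompdump.sample > zcompdump.empty
```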

a) normal .zcompdump:

num  calls                time                       self            name
 1)    1          58,93    58,93  100,00%     58,93    58,93  100,00%  (anon)

b) empty _comps=( ):

 1)    1          12,81    12,81  100,00%     12,81    12,81  100,00%  (anon)

That's 58-12=46 ms to be gained, a significant amount when thinking in terms of instant Zsh startup. Instant startup is these days rather a thing of the past anyway, with zsh-syntax-highlighting and zsh-autosuggestions wrapping all $widgets entries in a loop during startup.

I would go in the direction of implementing a new, trivial parser that reads key-value pairs and puts them into a hash. It could even predict the required hash size for the 1562 _comps elements (it's x4 AFAIR, as seen in addhashnode2), so that expandhashtable() is never called. The pairs would live in a new .zcompdump_comps file. Nothing would break; an old .zcompdump would still work.
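The file format and load path could look roughly like this. A sketch under stated assumptions: one "command completion-function" pair per line, a hypothetical zcompdump_comps.txt file name, and a shell loop standing in for the proposed C-level parser (which would additionally preallocate the hash):

```shell
# A few sample _comps entries:
typeset -A _comps
_comps[cd]=_cd
_comps[ls]=_ls
_comps[git]=_git

# Dump: one pair per line instead of one big array assignment.
# (In zsh one would iterate ${(k)_comps}; keys are spelled out here
# to keep the sketch shell-agnostic.)
for k in cd ls git; do
    printf '%s %s\n' "$k" "${_comps[$k]}"
done > zcompdump_comps.txt

# Load: a trivial line-by-line read, no array-assignment parsing.
typeset -A loaded
while read -r key val; do
    loaded[$key]=$val
done < zcompdump_comps.txt
```

The point of the format is that loading never goes through the shell parser's array-assignment path, which is what makes sourcing the current .zcompdump expensive.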

Sebastian Gniazdowski
psprint /at/ zdharma.org
