[pacman-dev] pacman -Qs first-run performance
Philipp Überbacher
hollunder at lavabit.com
Thu Sep 1 14:22:25 EDT 2011
Excerpts from Dan McGee's message of 2011-09-01 19:54:34 +0200:
> On Thu, Sep 1, 2011 at 12:33 PM, Philipp <hollunder at lavabit.com> wrote:
> > Hi there,
> > pacman works very well for me, with one exception: searching for
> > installed packages. For me a -Ss takes about 5 seconds and a -Qs about
> > 25 seconds. I also noticed that this only applies to the first search
> > run; subsequent runs take less than a second, no matter whether it's a
> > repo or a local search. But if you just want to look something up
> > quickly, 25 seconds is a long time.
>
> Of course subsequent runs are faster; the page cache comes into play.
I didn't expect them to help this much.
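A toy program like the one below illustrates the effect (the path is
only an example, and on older glibc clock_gettime needs -lrt): after
dropping the caches, the first read of a file has to hit the disk,
while the second one is served straight from the page cache.

/* Toy demonstration of the page cache effect: read the same file twice
 * and time each pass.  Drop caches first as root with
 * echo 3 > /proc/sys/vm/drop_caches; the path below is just an example. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double read_whole(const char *path)
{
        char buf[65536];
        struct timespec t0, t1;
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
                return -1.0;
        }
        clock_gettime(CLOCK_MONOTONIC, &t0);
        while (read(fd, buf, sizeof(buf)) > 0) {
                /* discard the data; we only care about the timing */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
        const char *path = "/var/lib/pacman/sync/core.db";
        printf("first read:  %.3f s\n", read_whole(path));
        printf("second read: %.3f s\n", read_whole(path));
        return 0;
}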
> 25 seconds is beyond slow. I have some results from my slowest machine
> (HDDs are running using PIO4 right now, not even DMA), and I can't top
> 5 seconds on -Ss or 10 seconds on -Qs *with* caches dropped [1]. The
> first timing in each command is uncached; the second is then the
> cached time.
>
> > Anyway, I had a very brief look at the code and am far from
> > understanding it, but I think libalpm/db.c handles the search, the
> > package names, descriptions, etc. are stored in a linked list of
> > structs, the whole thing is cached in memory only and regex wizardry
> > is used for the search. If that's true, the bottleneck I experience is
> > the caching.
> >
> > 5 seconds for -Ss is acceptable for me, but I wonder whether there's a
> > reasonably easy way to improve on the 25 seconds for -Qs.
> No, the bottleneck is your slow hard drive, your lack of RAM, and/or
> your slow CPU. Notice that on a cached search, my times don't exceed
> 0.3 seconds for any operation.
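For reference, this is roughly the linked-list walk plus regex match I
had pictured after my brief look at db.c. The struct and function names
below are made up for illustration; this is not the actual libalpm code.

/* Hypothetical sketch: walk a linked list of package structs and match
 * name/description against a POSIX extended regex. */
#include <regex.h>
#include <stddef.h>
#include <stdio.h>

struct pkg {
        const char *name;
        const char *desc;
        struct pkg *next;
};

static void search(struct pkg *head, const char *pattern)
{
        regex_t re;
        if (regcomp(&re, pattern, REG_EXTENDED | REG_ICASE | REG_NOSUB) != 0) {
                return;
        }
        for (struct pkg *p = head; p != NULL; p = p->next) {
                /* a package matches if its name or description matches */
                if (regexec(&re, p->name, 0, NULL, 0) == 0 ||
                    (p->desc && regexec(&re, p->desc, 0, NULL, 0) == 0)) {
                        printf("%s: %s\n", p->name, p->desc ? p->desc : "");
                }
        }
        regfree(&re);
}

int main(void)
{
        struct pkg b = { "pacman", "A library-based package manager", NULL };
        struct pkg a = { "vim", "Vi Improved, a configurable text editor", &b };
        search(&a, "package");
        return 0;
}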
>
> > My knowledge of pacman internals is non-existent and my C skills are
> > minimal, so I don't think I can be a lot of help with this one, but
> > maybe there's something else I can be of help with instead.
> If you could tell us more about your setup, that would be very
> helpful. CPU info from /proc/cpuinfo, memory info from /proc/meminfo
> and `free -m` output, `hdparm -Tt` numbers, etc.
cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 22
model name : Intel(R) Celeron(R) CPU 560 @ 2.13GHz
stepping : 1
cpu MHz : 2128.233
cache size : 1024 KB
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss tm pbe syscall nx lm constant_tsc up arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl tm2 ssse3 cx16 xtpr pdcm lahf_lm dts
bogomips : 4258.81
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
cat /proc/meminfo
MemTotal: 2050332 kB
MemFree: 964024 kB
Buffers: 17888 kB
Cached: 287340 kB
SwapCached: 10868 kB
Active: 487996 kB
Inactive: 276964 kB
Active(anon): 448104 kB
Inactive(anon): 176348 kB
Active(file): 39892 kB
Inactive(file): 100616 kB
Unevictable: 224952 kB
Mlocked: 130928 kB
SwapTotal: 2104508 kB
SwapFree: 2003180 kB
Dirty: 1432 kB
Writeback: 0 kB
AnonPages: 675560 kB
Mapped: 134016 kB
Shmem: 73536 kB
Slab: 46608 kB
SReclaimable: 14012 kB
SUnreclaim: 32596 kB
KernelStack: 1472 kB
PageTables: 8620 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3129672 kB
Committed_AS: 1367008 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 542528 kB
VmallocChunk: 34359167180 kB
HardwareCorrupted: 0 kB
AnonHugePages: 241664 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 13120 kB
DirectMap2M: 2074624 kB
free -m
             total       used       free     shared    buffers     cached
Mem:          2002       1080        922          0         17        299
-/+ buffers/cache:        763       1239
Swap:         2055         98       1956
hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 1426 MB in 2.00 seconds = 712.73 MB/sec
Timing buffered disk reads: 166 MB in 3.01 seconds = 55.09 MB/sec
> We've massively sped up pacman database reading for both hot and cold
> cases over what it was a year or so ago, so most of us consider this a
> "solved" problem by now. None of us see anywhere near the times you
> are seeing for operations, so anything you can provide us would be
> helpful.
>
> The output from these two commands might also be useful:
>
> echo 3 > /proc/sys/vm/drop_caches; /usr/bin/time pacman -Qs foobarbaz;
> /usr/bin/time pacman -Qs foobarbaz
> echo 3 > /proc/sys/vm/drop_caches; /usr/bin/time pacman -Ss foobarbaz;
> /usr/bin/time pacman -Ss foobarbaz
>
> -Dan
echo 3 > /proc/sys/vm/drop_caches; time pacman -Qs foobarbaz; time pacman -Qs foobarbaz

real 1m45.235s
user 0m0.127s
sys 0m0.590s

real 0m0.097s
user 0m0.043s
sys 0m0.023s
echo 3 > /proc/sys/vm/drop_caches; time pacman -Ss foobarbaz; time pacman -Ss foobarbaz

real 0m1.299s
user 0m0.200s
sys 0m0.023s

real 0m0.252s
user 0m0.200s
sys 0m0.010s
My 25 seconds were only an estimate, not a measurement; the measured reality is clearly even worse.
Best regards,
Philipp
> [1]
> [root at dublin ~]# echo 3 > /proc/sys/vm/drop_caches; time pacman -Ss
> foobarbaz; time pacman -Ss foobarbaz
>
> real 0m5.533s
> user 0m0.360s
> sys 0m0.553s
>
> real 0m0.349s
> user 0m0.343s
> sys 0m0.007s
>
> [root at dublin ~]# echo 3 > /proc/sys/vm/drop_caches; time pacman -Qs
> foobarbaz; time pacman -Qs foobarbaz
>
> real 0m10.030s
> user 0m0.050s
> sys 0m0.290s
>
> real 0m0.063s
> user 0m0.040s
> sys 0m0.020s
>