Dan McGee wrote:
Realize that this has drawbacks; someone who is fetching (not cloning) over HTTP will have to redownload the whole pack rather than just the incremental changeset. You may want something more like the included script, as it gives you the benefit of compressing objects without creating one huge pack.
-Dan
$ cat bin/prunerepos
#!/bin/sh
cwd=$(pwd)
for dir in $(ls | grep -F '.git'); do
    cd $cwd/$dir
    echo "pruning and packing $cwd/$dir..."
    git prune
    git repack -d
done
I realize that, but is it something we should really be concerned about? With our small repositories, the overhead of downloading a bunch of small files might even outweigh the size of a big pack. pacman.git is our biggest and currently has a 5.4MB pack when you gc it. Or maybe we should prune && repack them weekly, but gc them monthly or every 2 months?

Last week, we had HTTP access to http://projects.archlinux.org/git/ (not counting 403s and 404s) from 12 different IPs, 66 the week before that, and 63 and 84 in the two weeks before that. I hope most people use git://.
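If we went with that split schedule, a rough crontab sketch could look like the following. The /srv/git path, the times, and having the prunerepos script above in PATH are all assumptions on my part, not what is actually set up on the server:

# weekly: prune unreachable objects and pack only the new loose objects
0 4 * * 0   cd /srv/git && prunerepos
# monthly: full gc, which repacks each repo down to a single pack
0 4 1 * *   cd /srv/git && for d in *.git; do (cd "$d" && git gc); done

That would keep HTTP fetchers downloading small incremental packs most of the time, while still collapsing everything into one pack once a month.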