On Tue, May 19, 2009 at 2:03 PM, Andrei Thorp <garoth@gmail.com> wrote:
(...) When a computer on the network asks for a file that's been downloaded previously, there is no need to go into the Internet.
Yes and no.
Arch packages are not exactly small. I run a Squid cache, and a maximum object size of 128KB serves me pretty well. To accommodate all Arch packages, this setting would have to go up to maybe 150MB (for openoffice). If the cache starts caching every object up to 150MB, it won't be as effective, or it will balloon dramatically. Not to mention the memory requirements will go up too.
I'm under the impression that you can configure it by more than just object size, so it could cache Arch packages (say, from your favourite mirrors) without caching huge objects from everywhere. Yeah, it does increase the requirements, but I'm sure it's manageable.
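For what it's worth, a rough squid.conf sketch along these lines might do it (untested; the sizes and the refresh pattern are guesses on my part):

    # let the cache hold objects big enough for the largest packages
    maximum_object_size 200 MB

    # give the on-disk cache enough room (20GB here) to absorb that
    cache_dir ufs /var/spool/squid 20000 16 256

    # package files never change once released, so keep them a long time
    # (min and max are in minutes: 1 week / 30 days)
    refresh_pattern -i \.pkg\.tar\.gz$ 10080 100% 43200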
But no doubt HTTP access will be dramatically faster :)
Not to mention, Squid is only an HTTP caching proxy, not FTP.
"Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more." -- their website.
Squid is great, but I doubt it can help much with multiple Arch computers. It only handles download caching, and that's not enough.
(snip)
Yeah, some decent ideas there.
-AT
Another solution is to have all computers use a single pacman cache located on one computer, shared via NFS. Once one computer has downloaded a package, all the others can grab it directly from the local network. If you have both i686 and x86_64 computers, pacman can't differentiate between the two arches when the package name doesn't contain the arch (old pkgs and community pkgs): it reports an md5sum mismatch. You just need to say 'yes' to redownload the package when that happens. If you want to get rid of that problem, set up two caches: one for each arch.
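Roughly, the setup could look like this ('server' and the subnet are placeholders; the path is pacman's default cache dir). On the machine holding the cache, in /etc/exports:

    # pacman runs as root, so don't squash root on the clients
    /var/cache/pacman/pkg 192.168.1.0/24(rw,no_root_squash,async)

On each client, in /etc/fstab:

    server:/var/cache/pacman/pkg  /var/cache/pacman/pkg  nfs  defaults  0 0

Since pacman already uses /var/cache/pacman/pkg by default, nothing else should be needed. For the two-cache variant, you can point each arch at its own directory with CacheDir in the [options] section of /etc/pacman.conf on the clients, e.g.:

    [options]
    CacheDir = /var/cache/pacman/pkg-i686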