[arch-general] Building local repo - eliminating dups - why some new x86_64?

Andrei Thorp garoth at gmail.com
Tue May 19 14:03:10 EDT 2009

>> (...) When a computer on the network asks for a file
>> that's been downloaded previously, there is no need to go into the
>> Internet.
> Yes and no.
> arch packages are not exactly small. I run a squid cache and a cache object
> size of 128KB serves me pretty well. To accommodate all arch packages, this
> setting has to go up to maybe 150MB (for openoffice). If the cache starts
> caching every object of size up to 150MB, it won't be as effective or will
> balloon dramatically. Not to mention the memory requirement that will go up
> too.

I'm under the impression that you can configure it by more than just
object size, therefore letting it cache Arch packages (say, from your
favourite mirrors) and nothing else. Yeah, it does increase the
requirements, but I'm sure it's manageable.
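Something along these lines in squid.conf, for instance (just a sketch;
the mirror domain and sizes are placeholders you'd adapt):

```
# Allow objects big enough for the largest packages (placeholder size)
maximum_object_size 200 MB

# Only cache traffic to your chosen Arch mirrors (placeholder domain)
acl archmirror dstdomain .your-favourite-mirror.org
cache allow archmirror
cache deny all

# Package files never change once released, so keep them fresh for a long time
refresh_pattern \.pkg\.tar\.gz$ 129600 100% 129600
```

That way the 128KB default still applies in spirit to everything else,
since non-mirror traffic isn't cached at all.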

> But no doubt http access will be dramatically fast :)
> Not to mention, squid is only http caching proxy, not ftp.

"Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and
more." -- their website.

> squid is great but I doubt it can help with multiple computers with arch. It
> can handle only download caching but thats not enough.
> (snip)

Yeah, some decent ideas there.


More information about the arch-general mailing list