[arch-general] Suggestion: switch to zstd -19 for compressing packages over xz
Dan Sommers
2QdxY4RzWzUUiLuE at potatochowder.com
Sat Mar 16 12:01:40 UTC 2019
On 3/16/19 1:30 AM, Adam Fontenot via arch-general wrote:
> On Fri, Mar 15, 2019 at 11:10 PM Darren Wu via arch-general
> <arch-general at archlinux.org> wrote:
>>
>> I have 4Mbps (512KBytes/s) 'broad'band and i7-6500U CPU. I wanna cry.
>>
>
> Even in a worst-case scenario like this one, the Squash compression
> test I linked to shows an increase from 26 secs (xz -6) to 29 secs
> (zstd -19), so you wouldn't be significantly impacted by this change.
>
> I hope your local authorities decide to give you real broadband in the
> near future, however. :-)
My situation is similar to Darren's: My primary connection
to the internet is through my cell phone carrier and a
mobile WiFi hot spot. In urban areas, I can get as much as
50 megabits per second, but presently, due to my remote
location, it's around 5 or 6. I also have a monthly data
cap, which I share with my wife, and only WiFi (i.e., no
wires; that nice 300 megabits from hot spot to device is
shared by all devices, and there's a per-device limit,
too). FWIW, I have an i7-7700HQ CPU.
In the old days (when large files were a megabyte or two
and network bandwidth was measured in kilobits per second),
we assumed that the network was the bottleneck. I think
what Adam is proposing is that things are different now, and
that the CPU is the bottleneck. As always, it depends. :-)
My vote, whether it has any weight or not, is for higher
compression ratios at the expense of CPU cycles when
decompressing; i.e., xz rather than zstd. Also, consider
that the 10% increase in archive size is suffered repeatedly
as servers store and propagate new releases, but that the
increase in decompression time is only suffered by the
end user once, likely during a manual update operation or an
automated background process, where it doesn't matter much.
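For anyone who wants to plug in their own numbers, here is a
rough back-of-the-envelope sketch in Python. The archive size,
bandwidth, and decompression times are made-up placeholders
(only the roughly 10% size difference comes from this thread),
so treat it as a calculator, not a benchmark:

    #!/usr/bin/env python3
    # Back-of-the-envelope: download time plus decompression time.
    # All numbers below are illustrative assumptions, not measurements.

    BYTES_PER_MBPS = 1_000_000 / 8  # bytes per second per Mbit/s

    def total_time(archive_bytes, bandwidth_mbps, decompress_secs):
        """Seconds to download an archive and then unpack it."""
        download = archive_bytes / (bandwidth_mbps * BYTES_PER_MBPS)
        return download + decompress_secs

    # Hypothetical 100 MB update, zstd ~10% larger than xz, and xz
    # assumed to take several times longer to decompress than zstd.
    xz_total = total_time(100_000_000, 5, decompress_secs=20)
    zstd_total = total_time(110_000_000, 5, decompress_secs=5)

    print(f"xz  -6 : {xz_total:6.1f} s")
    print(f"zstd-19: {zstd_total:6.1f} s")

With those made-up inputs, on a 5 megabit connection the two come
out within a second or two of each other; on a fast connection the
decompression term dominates, which is the shift Adam is pointing at.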
I used to have this argument with coworkers over build times
and wake-from-sleep times. Is the extra time to decompress
archives really killing anyone's productivity? Are users
choosing OS distros based on how long it takes to install
Open Office? Are Darren and I dinosaurs, doomed to live in
a world where everyone else has a multi-gigabit per second
internet connection and a cell phone class CPU?
Jokingly, but not as much as you think,
Dan