On 06/03/14 10:14, Matthias Krüger wrote:
> On 03/06/2014 12:33 AM, Allan McRae wrote:
>> On 06/03/14 09:25, Matthias Krüger wrote:
>>> Looking at how pkgdelta works, I found this line:
>>>
>>>     xdelta3 -q -f -s "$oldfile" "$newfile" "$deltafile" || ret=$?
>>>
>>> which seems to be responsible for the actual delta generation. However,
>>> man xdelta3 revealed that there are different compression levels (0-9)
>>> (I'm not sure which one is the default). To make it short: we could have
>>> had smaller deltas ever since pkgdelta was introduced!
>>> Examples:
>>>
>>> -9:      16660K blender-12:2.69.c7ac0e-1_to_13:2.69.13290d-1-x86_64.delta
>>> default: 17832K blender-12:2.69.c7ac0e-1_to_13:2.69.13290d-1-x86_64.delta
>>>
>>> -9:      504K xbmc-12.3-10_to_12.3-11-x86_64.delta
>>> default: 572K xbmc-12.3-10_to_12.3-11-x86_64.delta
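
A sketch of what the change under discussion might look like (-9 is
xdelta3's maximum compression level; this is illustrative, not a committed
patch):

    # same pkgdelta line as quoted above, with -9 added to select
    # maximum compression
    xdelta3 -q -9 -f -s "$oldfile" "$newfile" "$deltafile" || ret=$?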
>> How is memory usage changed? Mainly when regenerating the package from
>> deltas?
>
> Surprisingly, for blender both runs took ~96MB and 1:50m (+/- a second).
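
For anyone wanting to reproduce such a measurement, something like the
following would capture peak memory and wall time; a sketch assuming GNU
time is installed at /usr/bin/time and using placeholder package names:

    # compare the default level against -9; look for "Maximum resident set
    # size" (peak RSS) and "Elapsed (wall clock) time" in the output
    /usr/bin/time -v xdelta3 -q -f -s old.pkg.tar.xz new.pkg.tar.xz out.default.delta
    /usr/bin/time -v xdelta3 -q -9 -f -s old.pkg.tar.xz new.pkg.tar.xz out.9.delta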
I'm assuming that is because most of the time/memory is spent applying the
diff rather than decompressing it, given the diff is quite small.

I know memory usage and speed were issues when adding -9 to package
compression was considered. My concern is that this also applies to deltas.
However, given that reconstructing a package from a delta is already
reasonably computationally intensive, I'm not sure this restriction applies
here. Anyone else care to comment?

A
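
For context, reconstruction here means the xdelta3 decode step that rebuilds
the new package from the old package plus the delta; a sketch with variable
names echoing the pkgdelta snippet above (-d and -s are xdelta3's decode and
source options):

    # apply the delta: old package + delta -> new package
    xdelta3 -d -q -s "$oldfile" "$deltafile" "$newfile"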