[arch-dev-public] [PATCH] Support xz compressed packages
This simple patch allows us to slowly migrate to xz compressed packages. New packages have to be xz compressed while old ones may keep their current compression until they are replaced by an update. The *.pkg.tar.* naming scheme has to be kept. I also removed the convert-to-any script, which we don't need.

Signed-off-by: Pierre Schmitz <pierre@archlinux.de>
---
 config                      |    3 +-
 convert-to-any              |   71 -------------------------------------------
 cron-jobs/create-filelists  |    2 +-
 cron-jobs/sourceballs       |    4 +-
 db-move                     |    4 +-
 misc-scripts/ftpdir-cleanup |    7 ++--
 6 files changed, 10 insertions(+), 81 deletions(-)
 delete mode 100755 convert-to-any

diff --git a/config b/config
index 92def37..7132d0b 100644
--- a/config
+++ b/config
@@ -12,7 +12,8 @@ TMPDIR="/srv/tmp"
 ARCHES=(i686 x86_64)
 BUILDSCRIPT="PKGBUILD"
 DBEXT=".db.tar.gz"
-PKGEXT=".pkg.tar.gz"
+# has to match .pkg.tar.*
+PKGEXT=".pkg.tar.xz"
 SRCEXT=".src.tar.gz"
 
 # Allowed licenses: get sourceballs only for licenses in this array
diff --git a/convert-to-any b/convert-to-any
deleted file mode 100755
index 53d1a7b..0000000
--- a/convert-to-any
+++ /dev/null
@@ -1,71 +0,0 @@
-#!/bin/bash
-#
-# Converts an existing architecture-independent package
-# for i686 or x86_64 into a package with "arch = any"
-#
-
-# -- Abhishek Dasgupta <abhidg@gmail.com>
-
-[ "$UID" = "" ] && UID=$(uid)
-OUTDIR="$(pwd)"
-WORKDIR="/tmp/convert-to-any.$UID"
-
-if [ $# -ne 1 ]; then
-    echo "Syntax: $(basename $0) <package-file>"
-    exit 1
-fi
-
-. "$(dirname $0)/db-functions"
-. "$(dirname $0)/config"
-
-cleanup() {
-    trap '' 0 2
-    rm -rf "$WORKDIR"
-    [ "$1" ] && exit $1
-}
-
-ctrl_c() {
-    echo "Interrupted" >&2
-    cleanup 0
-}
-
-die() {
-    echo "$*" >&2
-    cleanup 1
-}
-
-mkdir -p "$WORKDIR/build"
-
-oldpkgname="$1"
-
-if [ -z "$oldpkgname" ]; then
-    die "convert-to-any: which package to convert?"
-fi
-
-pkg="$(basename $oldpkgname)"
-newpkgname=$(echo $pkg | sed "s/-\(i686\|x86_64\)$PKGEXT/-any$PKGEXT/")
-
-if ! cp "$oldpkgname" "$WORKDIR/build/$pkg"; then
-    die "convert-to-any: failed to copy package to $WORKDIR"
-fi
-pushd "$WORKDIR/build" >/dev/null
-
-# Conversion of i686 package into "any" package.
-mkdir -p package
-if ! fakeroot bsdtar xf "$pkg" -C package; then
-    die "convert-to-any: error in extracting $oldpkgname"
-fi
-
-sed -i "s/arch = \(i686\|x86_64\)/arch = any/g" package/.PKGINFO
-pushd package >/dev/null
-case "$newpkgname" in
-    *tar.gz) TAR_OPT="z" ;;
-    *tar.bz2) TAR_OPT="j" ;;
-    *tar.xz) TAR_OPT="J" ;;
-    *) die "$newpkgname does not have a valid archive extension." ;;
-esac
-fakeroot bsdtar c${TAR_OPT}f "$OUTDIR/$newpkgname" .PKGINFO *
-popd >/dev/null
-
-popd >/dev/null
-cleanup
diff --git a/cron-jobs/create-filelists b/cron-jobs/create-filelists
index c9d7db9..62e72c1 100755
--- a/cron-jobs/create-filelists
+++ b/cron-jobs/create-filelists
@@ -44,7 +44,7 @@ for repo in $repos; do
     fi
 
     # create file lists
-    for pkg in $repodir/*${PKGEXT}; do
+    for pkg in $repodir/*.pkg.tar.*; do
         pkgname="$(getpkgname "$pkg")"
         pkgver="$(getpkgver "$pkg")"
         tmppkgdir="${TMPDIR}/${repodir}/${pkgname}-${pkgver}"
diff --git a/cron-jobs/sourceballs b/cron-jobs/sourceballs
index b7a4885..f08d349 100755
--- a/cron-jobs/sourceballs
+++ b/cron-jobs/sourceballs
@@ -47,11 +47,11 @@ for repo in $repos; do
             continue
         fi
         cd $ftppath
-        for pkg in *$PKGEXT; do
+        for pkg in *.pkg.tar.*; do
             [ -f "$pkg" ] || continue
             pkgbase=$(getpkgbase $pkg)
             srcpath="$srcbase/"
-            srcpkg="${pkg//$PKGEXT/$SRCEXT}"
+            srcpkg="${pkg//.pkg.tar.*/$SRCEXT}"
             srcpkg="${srcpkg//-$arch/}"
             srcpkgname="${srcpkg%-*-*$SRCEXT}"
             srcpkgbase="${srcpkg/$srcpkgname/$pkgbase}"
diff --git a/db-move b/db-move
index efd54e0..3539a47 100755
--- a/db-move
+++ b/db-move
@@ -58,7 +58,7 @@ if [ -d "$packagebase/repos/$svnrepo_from" ]; then
     . "$packagebase/repos/$svnrepo_from/$BUILDSCRIPT"
 
     for i in ${pkgname[@]}; do
-        _pkgfile="$i-$pkgver-$pkgrel-$_arch$PKGEXT"
+        _pkgfile="$i-$pkgver-$pkgrel-$_arch.pkg.tar.*"
         if [ ! -f "$ftppath_from/${_arch}/$_pkgfile" ]; then
             die "error: package file '$_pkgfile' not found in repo '$repofrom'"
         fi
@@ -107,7 +107,7 @@ if [ -d "$packagebase/repos/$svnrepo_from" ]; then
         #use '*' to move the old DB too
         mv $repoto$DBEXT* $ftppath_to/$architecture
         for i in ${pkgname[@]}; do
-            _pkgfile="$i-$pkgver-$pkgrel-$_arch$PKGEXT"
+            _pkgfile="$i-$pkgver-$pkgrel-$_arch.pkg.tar.*"
             if [ "${_arch}" == "any" ]; then
                 mv ${_pkgfile} $ftppath_to/any
                 ln -s ../any/${_pkgfile} $ftppath_to/$architecture/
diff --git a/misc-scripts/ftpdir-cleanup b/misc-scripts/ftpdir-cleanup
index f0f89a3..62eec9f 100755
--- a/misc-scripts/ftpdir-cleanup
+++ b/misc-scripts/ftpdir-cleanup
@@ -51,7 +51,6 @@ for arch in ${ARCHES[@]}; do
 
     for pkg in $TMPDIR/*; do
         filename=$(grep -A1 '^%FILENAME%$' "${pkg}/desc" | tail -n1)
-        [ -z "${filename}" ] && filename="${pkg}${PKGEXT}"
 
         if [ ! -e "${filename}" ]; then
             MISSINGFILES="${MISSINGFILES} ${filename}"
@@ -69,7 +68,7 @@ for arch in ${ARCHES[@]}; do
         fi
     done
 
-    for pkg in *$PKGEXT; do
+    for pkg in *.pkg.tar.*; do
         if [ ! -e "$pkg" ]; then
             continue
         fi
@@ -161,8 +160,8 @@ ARCHINDEPFILES=""
 if [ -d "$ftppath_base/any" ]; then
     cd "$ftppath_base/any"
 
-    for pkg in *$PKGEXT; do
-        [ -f "$pkg" ] || continue    # in case we get a file named "*.pkg.tar.gz"
+    for pkg in *.pkg.tar.*; do
+        [ -f "$pkg" ] || continue    # in case we get a file named "*.pkg.tar.*"
         found=0
         #check for any existing symlinks
         for arch in ${ARCHES[@]}; do
--
1.6.6.1

Pierre Schmitz, https://users.archlinux.de/~pierre
On Monday, 15 February 2010 15:04:28, Pierre Schmitz wrote:
This simple patch allows us to slowly migrate to xz compressed packages.
Yes, it's really that easy when we assume that all new packages have to be xz compressed. So once this is applied and online, everybody has to set the new PKGEXT in his makepkg.conf.

I didn't alter the db file compression as this would obviously break everything, as it is hard coded in pacman. Compressing the sources with xz (or compressing them at all) is not worth it, as they usually include already compressed files from upstream. pacman and devtools are already prepared for handling xz compression.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
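For reference, the local change for packagers is a single line in /etc/makepkg.conf (illustrative sketch only; exact quoting in the shipped file may differ by pacman version):

# makepkg.conf: build xz-compressed packages, e.g. foo-1.0-1-x86_64.pkg.tar.xz
PKGEXT='.pkg.tar.xz'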
On Monday, 15 February 2010 15:12:15, Pierre Schmitz wrote:
pacman and devtools are already prepared for handling xz compression.
namcap does not support xz. But it should be as easy as replacing the Python-internal tar class with a call to bsdtar. Anyone want to write a patch?

--
Pierre Schmitz, https://users.archlinux.de/~pierre
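A rough sketch of the bsdtar idea (hypothetical, just the extraction step; namcap itself is Python and would shell out to this): bsdtar reads any compression libarchive knows, so pulling a member out of an xz package is one call.

# print .PKGINFO from an xz-compressed package to stdout
bsdtar -xOf foo-1.0-1-x86_64.pkg.tar.xz .PKGINFO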
On 15.02.2010 15:12, Pierre Schmitz wrote:
On Monday, 15 February 2010 15:04:28, Pierre Schmitz wrote:
This simple patch allows us to slowly migrate to xz compressed packages.
Yes, it's really that easy when we assume that all new packages have to be xz compressed. So once this is applied and online, everybody has to set the new PKGEXT in his makepkg.conf.
I didn't alter the db file compression as this would obviously break everything, as it is hard coded in pacman. Compressing the sources with xz (or compressing them at all) is not worth it, as they usually include already compressed files from upstream.
I am still against implementing a solution that is that inflexible. Where's the problem in just allowing any (known) compression?
On Monday, 15 February 2010 16:57:17, Thomas Bächler wrote:
I am still against implementing a solution that is that inflexible. Where's the problem in just allowing any (known) compression?
Write a patch.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On 15.02.2010 17:09, Pierre Schmitz wrote:
On Monday, 15 February 2010 16:57:17, Thomas Bächler wrote:
I am still against implementing a solution that is that inflexible. Where's the problem in just allowing any (known) compression?
Write a patch.
I thought you wanted to cut down on packaging to do only this, so I guessed you would invest the time to do it right, and not do half-baked half-broken solutions.
On Monday, 15 February 2010 17:21:53, Thomas Bächler wrote:
I thought you wanted to cut down on packaging to do only this, so I guessed you would invest the time to do it right, and not do half-baked half-broken solutions.
Seriously, what's the point in putting effort into flexibility if you don't need it? Or, if you have one reason why it should be useful to allow every packager their favorite compression format, I'll look into it. It's not that hard to do, but it makes the code more complex than needed.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On Monday, 15 February 2010 17:27:51, Pierre Schmitz wrote:
if you have one reason why it should be useful
I found one myself: pacman, libarchive and xz-utils should stay in gz format for a while to make it possible to update old machines which haven't been updated in a long time.

BUT: for these rare cases we can simply use repo-add etc. directly.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
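For those rare cases, adding a gzip-compressed package by hand would look roughly like this (package name and repo path are made up for illustration):

# add an old-style .pkg.tar.gz directly to the repo database
repo-add /srv/ftp/core/os/x86_64/core.db.tar.gz pacman-3.3.3-1-x86_64.pkg.tar.gz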
On Mon, 15 Feb 2010 17:47:53 +0100, Pierre Schmitz <pierre@archlinux.de> wrote:
On Monday, 15 February 2010 17:27:51, Pierre Schmitz wrote:
if you have one reason why it should be useful
I found one myself. pacman, libarchive and xz-utils should stay in gz format for a while to make it possible to update old machines which haven't been updated for long.
BUT: for these rare cases we can simply use repo-add etc. directly.
Any idea how long it takes to compress the OOo main package? As far as I know it needs a lot more time than gz/bz2 compression. Does xz make use of SMP CPUs? If not, it's not an improvement and we should think about pbzip2, which now supports tar interaction and pipes!

-Andy
On Mon, 2010-02-15 at 19:32 +0100, Andreas Radke wrote:
any idea how long it takes to compress OOo main package? as far as I know it needs a lot more time than gz/bz compression. does xz make use of SMP cpus?
if not it's not an improvement and we should think about pbzip that now supports tar interaction and pipes!
Depends on the compression level. It's slower than gzip compression, but faster than bzip2 if you don't select the highest compression level. I guess you'll save more time uploading an xz-compressed openoffice package than you'll lose by compressing it with that.
On Monday, 15 February 2010 19:36:41, Jan de Groot wrote:
if not it's not an improvement and we should think about pbzip that now supports tar interaction and pipes!
Depends on the compression level. It's slower than gzip compression, but faster than bzip2 if you don't select the highest compression level. I guess you'll save more time uploading an xz-compressed openoffice package than you'll lose by compressing it with that.
Indeed. E.g. I just compiled Qt and the package size went down from 34 MB to 25 MB. This saves me about 10 MByte of upload for each arch. I didn't notice any massive increase in compression time compared to compile and upload time.

Sure, there are edge cases like big icon packs which won't compress well anyway. But in general the benefit of the much smaller packages is really worth it. And don't forget that you only compress a package once, but it's uploaded and downloaded many, many times.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On Mon, 2010-02-15 at 22:15 +0100, Pierre Schmitz wrote:
On Monday, 15 February 2010 19:36:41, Jan de Groot wrote:
if not it's not an improvement and we should think about pbzip that now supports tar interaction and pipes!
Depends on the compression level. It's slower than gzip compression, but faster than bzip2 if you don't select the highest compression level. I guess you'll save more time uploading an xz-compressed openoffice package than you'll lose by compressing it with that.
Indeed. E.g. I just compiled Qt and the package size went down from 34 MB to 25 MB. This saves me about 10 MByte of upload for each arch. I didn't notice any massive increase in compression time compared to compile and upload time.
Sure, there are edge cases like big icon packs which won't compress well anyway. But in general the benefit of the much smaller packages is really worth it. And don't forget that you only compress a package once, but it's uploaded and downloaded many, many times.
What's the compression rate used in your test? I've seen benchmarks for an older version of xz-utils (I think it was called lzma-utils). In that benchmark the simplest compression level was faster and smaller than gzip -9, and bzip2 was outperformed anyway. Note that tighter compression needs a bigger dictionary size when unpacking, which can be crap on low-memory systems.
On Tuesday, 16 February 2010 08:51:50, Jan de Groot wrote:
What's the compression rate used in your test? I've seen benchmarks for an older version of xz-utils (I think it was called lzma-utils). In that benchmark the most simple compression level was faster and smaller than gzip -9, and bzip2 was outperformed anyways.
I just used the default which should be -6.
Note that tighter compressions needs a bigger dictionary size when unpacking, which can be crap on low-memory systems.
Do you have numbers? We could do some testing in qemu with e.g. 128MB RAM. Or maybe have a look at Slackware, which is already using xz for its whole repo. According to them you need at least a 486 and 64MB RAM. That does not look like anything we should worry about.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On Tue, 2010-02-16 at 11:21 +0100, Pierre Schmitz wrote:
On Tuesday, 16 February 2010 08:51:50, Jan de Groot wrote:
What's the compression rate used in your test? I've seen benchmarks for an older version of xz-utils (I think it was called lzma-utils). In that benchmark the most simple compression level was faster and smaller than gzip -9, and bzip2 was outperformed anyways.
I just used the default which should be -6.
Note that tighter compressions needs a bigger dictionary size when unpacking, which can be crap on low-memory systems.
Do you have numbers? We could do some testing in qemu with e.g. 128MB RAM etc.. Or maybe have a look at Slackware who are already using xz for their whole repo. According to them you need at least a 486 and 64MB RAM. Does not look like anything we should worry about.
Did some testing with openoffice-base 3.2.0-1-x86_64.tar:

compression speed:
gzip:  0m28.945s
bzip2: 1m21.876s
xz -1: 0m49.244s
xz -2: 1m18.444s
xz -3: 3m34.208s
xz -6: 4m41.148s

decompression speed:
gzip:  0m 5.772s
bzip2: 0m29.433s
xz -1: 0m13.983s
xz -2: 0m12.949s
xz -3: 0m12.706s
xz -6: 0m11.462s

size:
tar:   370728960
gzip:  173262975
bzip2: 165765469
xz -1: 157099460
xz -2: 150147496
xz -3: 142961984
xz -6: 129979708

For decompression, it doesn't matter so much which xz level you choose. For compression, anything beyond -2 is painfully slow. These times are measured on a Core2Duo E4500 by compressing the single tar file. Note that -6 saves a whopping 20MB over compressing with -2, but whatever we choose is always better than gzip or bzip2.
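The mail doesn't show the exact commands; a plausible way to reproduce timings like these, assuming the uncompressed tar is at hand, is:

tarball=openoffice-base-3.2.0-1-x86_64.tar

# compression
time gzip -c  "$tarball" > "$tarball.gz"
time bzip2 -c "$tarball" > "$tarball.bz2"
for level in 1 2 3 6; do
    time xz -$level -c "$tarball" > "$tarball.$level.xz"
done

# decompression (discard the output, only the time matters)
time xz -dc "$tarball.6.xz" > /dev/null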
On Tuesday, 16 February 2010 12:03:09, Jan de Groot wrote:
anything beyond -2 is painfully slow.
I still think it won't hurt to go with the default compression rate, though. At least for now; we don't have to change makepkg to pass a compression rate. And if we add the time for uploading the data, most of us will still win. And I just assume that for most packages compression time doesn't matter: the packages are either small, or if they are big you need a long time to compile or upload them anyway.

For users -6 is clearly the best. And we hopefully have more users than packagers ;-)

--
Pierre Schmitz, https://users.archlinux.de/~pierre
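As a side note: later pacman releases (after this thread) made the compression command configurable in makepkg.conf, so a lower level could be set centrally if -6 ever proved too slow; roughly:

# makepkg.conf (newer pacman only; -2 shown as an example level)
COMPRESSXZ=(xz -c -z -2 -)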
On 16/02/10 21:03, Jan de Groot wrote:
On Tue, 2010-02-16 at 11:21 +0100, Pierre Schmitz wrote:
On Tuesday, 16 February 2010 08:51:50, Jan de Groot wrote:
What's the compression rate used in your test? I've seen benchmarks for an older version of xz-utils (I think it was called lzma-utils). In that benchmark the most simple compression level was faster and smaller than gzip -9, and bzip2 was outperformed anyways.
I just used the default which should be -6.
Note that tighter compressions needs a bigger dictionary size when unpacking, which can be crap on low-memory systems.
Do you have numbers? We could do some testing in qemu with e.g. 128MB RAM etc.. Or maybe have a look at Slackware who are already using xz for their whole repo. According to them you need at least a 486 and 64MB RAM. Does not look like anything we should worry about.
Did some testing with openoffice-base 3.2.0-1-x86_64.tar:

compression speed:
gzip:  0m28.945s
bzip2: 1m21.876s
xz -1: 0m49.244s
xz -2: 1m18.444s
xz -3: 3m34.208s
xz -6: 4m41.148s

decompression speed:
gzip:  0m 5.772s
bzip2: 0m29.433s
xz -1: 0m13.983s
xz -2: 0m12.949s
xz -3: 0m12.706s
xz -6: 0m11.462s
Is that right? Decompression gets faster with higher compression ratio?
On Tue, 2010-02-16 at 21:54 +1000, Allan McRae wrote:
Is that right? Decompression gets faster with higher compression ratio?
Sure, why not? With xz -6 you only need to read and process 124MB, with -1 you have to read 150MB. The decompression algorithm is the same for both ratios; the only change is the archive size and the dictionary used. The higher the ratio, the bigger the dictionary becomes and the more memory you'll need for decompression.
On 16.02.2010 12:03, Jan de Groot wrote:
Did some testing with openoffice-base 3.2.0-1-x86_64.tar:

compression speed:
gzip:  0m28.945s
bzip2: 1m21.876s
xz -1: 0m49.244s
xz -2: 1m18.444s
xz -3: 3m34.208s
xz -6: 4m41.148s

decompression speed:
gzip:  0m 5.772s
bzip2: 0m29.433s
xz -1: 0m13.983s
xz -2: 0m12.949s
xz -3: 0m12.706s
xz -6: 0m11.462s

size:
tar:   370728960
gzip:  173262975
bzip2: 165765469
xz -1: 157099460
xz -2: 150147496
xz -3: 142961984
xz -6: 129979708
For decompression, it doesn't matter so much which xz level you choose. For compression, anything beyond -2 is painfully slow. These times are measured on a Core2Duo E4500 by compressing the single tar file. Note that -6 saves a whopping 20MB over compressing with -2, but whatever we choose is always better than gzip or bzip2.
We should measure the resident memory usage during decompression because IIRC that is a problem.
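One way to get those numbers (assuming GNU time is installed as /usr/bin/time; the shell builtin won't report memory):

# "Maximum resident set size" in the output is the figure of interest
/usr/bin/time -v xz -dc openoffice-base-3.2.0-1-x86_64.pkg.tar.xz > /dev/null

# xz can also print per-file details; depending on the xz version this
# includes the memory needed for decompression
xz --list --verbose openoffice-base-3.2.0-1-x86_64.pkg.tar.xz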
On Tuesday, 16 February 2010 13:17:18, Thomas Bächler wrote:
We should measure the resident memory usage during decompression because IIRC that is a problem.
Just for the record: I did a test in a VM and the result was that we don't need to worry about it. 64MB is more than enough to decompress or even compress the openoffice package (I also tested installing it with pacman). Even with 32MB there are no problems. But if you want to compress with so little RAM, you have to tell xz to use more than 40% of RAM (which is the default maximum it will consume).

--
Pierre Schmitz, https://users.archlinux.de/~pierre
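For the record, raising that cap is a command-line option; the spelling has changed between xz releases (current xz-utils use --memlimit, early versions used --memory), e.g.:

# allow xz to use up to 90% of RAM while compressing
xz -6 --memlimit=90% -c bigpackage.tar > bigpackage.tar.xz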
On Monday, 15 February 2010 19:32:32, Andreas Radke wrote:
any idea how long it takes to compress OOo main package? as far as I know it needs a lot more time than gz/bz compression. does xz make use of SMP cpus?
if not it's not an improvement and we should think about pbzip that now supports tar interaction and pipes!
SMP will be supported in the next version of xz. And yes, it takes a long time to compress openoffice: on my old PC it took 4 minutes to recompress, but the package size went down from 166MB to 124MB. So, unless you can upload with more than 316 KByte/s (I hope I got that right ;-)) you are still faster. (And I know that your PC is a lot faster than mine.)

So even if you only take your own time into account you still win. Adding the time/traffic decrease of all mirrors and users to our calculation, we will clearly win.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On Mon, Feb 15, 2010 at 10:27 AM, Pierre Schmitz <pierre@archlinux.de> wrote:
On Monday, 15 February 2010 17:21:53, Thomas Bächler wrote:
I thought you wanted to cut down on packaging to do only this, so I guessed you would invest the time to do it right, and not do half-baked half-broken solutions.
Seriously, what's the point in putting effort into flexibility if you don't need it? Or, if you have one reason why it should be useful to allow every packager their favorite compression format, I'll look into it. It's not that hard to do, but it makes the code more complex than needed.
I 100% agree with Thomas. Saying we "won't ever need it" is assuming that xz is the ultimate compression algorithm and nothing will ever be better. A year from now "zz compression" might come out and be awesome. Now we'd have to repeat this entire process for the zz algorithm. Being flexible isn't about supporting old stuff - it's about supporting new.
On Monday, 15 February 2010 19:26:59, Aaron Griffin wrote:
I 100% agree with Thomas. Saying we "won't ever need it" is assuming that xz is the ultimate compression algorithm and nothing will ever be better. A year from now "zz compression" might come out and be awesome. Now we'd have to repeat this entire process for the zz algorithm.
Being flexible isn't about supporting old stuff - it's about supporting new.
I don't get it. I am not talking about making it impossible to add support for this forever. It's just about implementing only what we need right now.

Well, I can just place *.pkg.tar.* everywhere, but the reason I didn't want that is that it is not safe in cases where this pattern might match more than one file. So this patch was the easiest and least intrusive approach I could think of. I know that we should rewrite the whole stuff in the future. But this way we can start using xz right now and we won't lose any flexibility in the future.

So, why don't we add this and use it for a while instead? We can alter the scripts any time later if needed.

Another question: do we really want to have all kinds of random compressions for packages in our repo, or should we agree on one towards which we migrate step by step? This wouldn't be a problem if we had a dedicated file name extension, but this way everybody will just be confused.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On Mon, Feb 15, 2010 at 3:10 PM, Pierre Schmitz <pierre@archlinux.de> wrote:
On Monday, 15 February 2010 19:26:59, Aaron Griffin wrote:
I 100% agree with Thomas. Saying we "won't ever need it" is assuming that xz is the ultimate compression algorithm and nothing will ever be better. A year from now "zz compression" might come out and be awesome. Now we'd have to repeat this entire process for the zz algorithm.
Being flexible isn't about supporting old stuff - it's about supporting new.
I don't get it. I am not talking about making it impossible to add support for this forever. It's just about implementing only what we need right now.
Well, I can just place *.pkg.tar.* everywhere, but the reason I didn't want that is that it is not safe in cases where this pattern might match more than one file. So this patch was the easiest and least intrusive approach I could think of. I know that we should rewrite the whole stuff in the future. But this way we can start using xz right now and we won't lose any flexibility in the future.
So, why don't we add this and use it for a while instead? We can alter the scripts anytime later if needed.
Another question: do we really want to have all kinds of random compressions for packages in our repo, or should we agree on one towards which we migrate step by step? This wouldn't be a problem if we had a dedicated file name extension, but this way everybody will just be confused.
My big concern here is that this is a much bigger question than it seems. "Here's a patch that makes us switch to xz compression" isn't the same as "Do you guys think we should switch to xz compression completely?". It seems to be jumping the gun a little bit, but I could be mistaken.

So... let's start from there: Is everyone ok with ONLY allowing xz compressed packages into the repo going forward? gz packages will no longer work with this change.

I, for one, don't have an issue with "all kinds of random compressions" because there's really only like 4 or 5 we'd use. And it's transparent to everyone who is not looking at a list of files.

As for actual changes to support that, setting PKGEXT=".pkg.tar.*" would require only minor changes to these lines:

$ grep -rn PKGEXT *
cron-jobs/sourceballs:54: srcpkg="${pkg//$PKGEXT/$SRCEXT}"
db-move:61: _pkgfile="$i-$pkgver-$pkgrel-$_arch$PKGEXT"
db-move:103: _pkgfile="$i-$pkgver-$pkgrel-$_arch$PKGEXT"
db-move:110: _pkgfile="$i-$pkgver-$pkgrel-$_arch$PKGEXT"
db-update:118: if [ "$_pkgfile" = "$_pkgname-$pkgver-$pkgrel-any$PKGEXT" ]; then
db-update:189: if [ "$_pkgfile" = "$_pkgname-$pkgver-$pkgrel-$current_arch$PKGEXT" ]; then
misc-scripts/ftpdir-cleanup:54: [ -z "${filename}" ] && filename="${pkg}${PKGEXT}"

Note that I think the sourceballs line might actually work...
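The sourceballs line does appear to survive a glob-valued PKGEXT, since bash pattern substitution treats the * as a wildcard; a quick check with made-up values:

PKGEXT=".pkg.tar.*"
SRCEXT=".src.tar.gz"
pkg="foo-1.0-1-x86_64.pkg.tar.xz"
echo "${pkg//$PKGEXT/$SRCEXT}"    # prints foo-1.0-1-x86_64.src.tar.gz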
On Monday, 15 February 2010 22:33:07, Aaron Griffin wrote:
My big concern here is that this is a much bigger question than it seems. "Here's a patch that makes us switch to xz compression" isn't the same as "Do you guys think we should switch to xz compression completely?". It seems to be jumping the gun a little bit, but I could be mistaken
I assumed that we had already agreed on that. But I really don't care enough about this to discuss it to death. If we think we should support several compression types right now, I can add it. Though I still think it's easier to just agree on one (which doesn't mean that we can't switch again in 2030 when a smart guy discovers an even more efficient algorithm ;-)).

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On 15.02.2010 22:10, Pierre Schmitz wrote:
Well, I can just place *.pkg.tar.* everywhere, but the reason I didn't want that is that it is not safe in cases where this pattern might match more than one file.
I agree, it is unsafe and shouldn't be done anywhere. Instead of doing that, you can abstract it into a function that does all kinds of safety checks before returning one file name, or a list of file names. This function should then be safe to use everywhere. As I said earlier: if we do this right once, right now, we never have to care about it again and can always just upload and submit whatever compression format we want in the future.
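A minimal sketch of what such a helper could look like (hypothetical, not the final implementation): the caller passes the glob unquoted, so the shell expands it into one argument per match, and the function refuses to continue unless exactly one existing file came out of that expansion.

getpkgfile() {
    if [ $# -ne 1 ]; then
        echo "error: zero or multiple files match the package pattern" >&2
        return 1
    elif [ ! -f "$1" ]; then
        echo "error: package file '$1' does not exist" >&2
        return 1
    fi
    echo "$1"
}

# usage (variable names borrowed from db-move; note the unquoted glob):
pkgfile=$(getpkgfile "$ftppath_from/$_arch/"$i-$pkgver-$pkgrel-$_arch.pkg.tar.*) || exit 1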
On Monday, 15 February 2010 22:45:52, Thomas Bächler wrote:
Instead of doing that, you can abstract it into a function that will do all kinds of safety checks before returning one file name, or a list of file names. This function then should be safe to use everywhere.
That was my initial plan, until I thought that we won't need this complexity if we migrate to only one compression method anyway. But sure, if everybody thinks that we need multiple compression methods at the same time, let's forget about the patch and I'll write a new one.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On Mon, Feb 15, 2010 at 4:01 PM, Pierre Schmitz <pierre@archlinux.de> wrote:
But sure, if everybody thinks that we need multiple compression methods at the same time, let's forget about the patch and I'll write a new one.
I don't think anyone is suggesting that. It's just that the patch was sent to the ML, apparently for some sort of code review. That's all I (we?) are suggesting: there is room for improvement <here>. I can think of a handful of cases where switching to "one true compression algorithm" will bite us in the ass, and thus think it needs to be more flexible. I'm not suggesting "I want to do crazy shit with compression"
On Monday, 15 February 2010 23:06:26, Aaron Griffin wrote:
It's just that the patch was sent to the ML, apparently for some sort of code review. That's all I (we?) are suggesting: there is room for improvement <here>.
Sure, and I appreciate your input. I also get your arguments. And I don't think that this patch is the best approach ever; it really depends on what we need and I guess my assumption was not quite correct here.
I can think of a handful of cases where switching to "one true compression algorithm" will bite us in the ass, and thus think it needs to be more flexible. I'm not suggesting "I want to do crazy shit with compression"
Let's make a deal: I'll create a new branch in my repo and see what it takes to make the package handling more general. (It will probably take less code than we already put into this discussion ;-))

--
Pierre Schmitz, https://users.archlinux.de/~pierre
On 15.02.2010 23:33, Pierre Schmitz wrote:
Let's make a deal: I'll create a new branch in my repo and see what it takes to make the package handling more general. (it will probably take less code than we already put into this discussion ;-))
That's not the point. We can discuss for half a year to get two lines of code right and I'll be fine with it.
On Mon, Feb 15, 2010 at 8:04 AM, Pierre Schmitz <pierre@archlinux.de> wrote:
convert-to-any | 71 -------------------------------------------
delete mode 100755 convert-to-any
Is this part of the patch, or an accident? Seems like it should be a separate commit, no?
On Monday, 15 February 2010 19:28:14, Aaron Griffin wrote:
Is this part of the patch, or an accident? Seems like it should be a separate commit, no?
You could remove it in a separate commit if you like. According to Allan we don't need this script, so I removed it instead of checking what would have to be adjusted to support xz.

--
Pierre Schmitz, https://users.archlinux.de/~pierre
participants (6): Aaron Griffin, Allan McRae, Andreas Radke, Jan de Groot, Pierre Schmitz, Thomas Bächler