[aur-general] Upgraded AUR to 1.8.0
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1]. Please report any issues on the AUR bug tracker [2]. [1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
On Mon, Feb 21, 2011 at 10:47 AM, Lukas Fleischer <archlinux@cryptocrack.de>wrote:
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1].
Please report any issues on the AUR bug tracker [2].
[1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
First congrats for all the improvements. This leads me to two questions: - It appears the "out of date" date is set to the same value for all packages marked out of date. I assume this is the intended behavior as the info was not stored anywhere, right? - What about the "submitter"? Was this info always known or is there some default in some cases? The thing is I have one package [1] I maintain where the submitter (Allan) is not credited. Should I consider this info as accurate and add the submitter to the contributor entries? [1] http://aur.archlinux.org/packages.php?ID=22682 -- Cédric Girard
On Mon, Feb 21, 2011 at 11:02:48AM +0100, Cédric Girard wrote:
First congrats for all the improvements. This leads me to two questions:
- It appears the "out of date" date is set to the same value for all packages marked out of date. I assume this is the intended behavior as the info was not stored anywhere, right?
Correct. All packages that were flagged during the upgrade have a timestamp of "Mon, 21 Feb 2011 09:23:39 +0000" (that's the moment I ran the database upgrade script).
- What about the "submitter"? Was this info always known or is there some default in some cases? The thing is I have one package [1] I maintain where the submitter (Allan) is not credited. Should I consider this info as accurate and add the submitter to the contributor entries?
The submitter is the person who initially uploaded a package to the AUR. That should be the first maintainer/contributor in most cases, but might also be some Arch dev/TU who moved a package from [core], [extra] or [community] to the AUR during some cleanup or whatever.
On 21/02/11 20:02, Cédric Girard wrote:
- What about the "submitter"? Was this info always known or is there some default in some cases? The thing is I have one package [1] I maintain where the submitter (Allan) is not credited. Should I consider this info as accurate and add the submitter to the contributor entries?
If I submitted that it was only because it was dropped from the repos to the AUR. Allan
On Mon, Feb 21, 2011 at 11:29 AM, Allan McRae <allan@archlinux.org> wrote:
On 21/02/11 20:02, Cédric Girard wrote:
- What about the "submitter"? Was this info always known or is there some default in some cases? The thing is I have one package [1] I maintain where the submitter (Allan) is not credited. Should I consider this info as accurate and add the submitter to the contributor entries?
If I submitted that it was only because it was dropped from the repos to the AUR.
Allan
OK. Thanks. This is kind of what I thought. -- Cédric Girard
On Mon, 21 Feb 2011 10:47:50 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1].
Please report any issues on the AUR bug tracker [2].
[1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful. Dieter
On Mon, Feb 21, 2011 at 11:08:05AM +0100, Dieter Plaetinck wrote:
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
There were several vulnerabilities with the automatic tarball extraction. Think of "tarballs bombs" (as in "ZIP bombs"). Think of what happens when a source tarball that contains a symlink to "/etc/passwd" is uploaded (and the web server isn't chrooted). Just to give two simple samples. Moreover, I've heard of some encoding issues with users just copy-pasting files from the AUR frontend. Generally, everyone should download and use the tarballs to build packages. The PKGBUILD preview is retained due to several requests.
On Mon, Feb 21, 2011 at 11:37 AM, Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Mon, Feb 21, 2011 at 11:08:05AM +0100, Dieter Plaetinck wrote:
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
There were several vulnerabilities with the automatic tarball extraction. Think of "tarballs bombs" (as in "ZIP bombs"). Think of what happens when a source tarball that contains a symlink to "/etc/passwd" is uploaded (and the web server isn't chrooted). Just to give two simple samples.
Moreover, I've heard of some encoding issues with users just copy-pasting files from the AUR frontend. Generally, everyone should download and use the tarballs to build packages. The PKGBUILD preview is retained due to several requests.
Thanks for information and work! -- Sébastien Luttringer www.seblu.net
On Mon, 21 Feb 2011 11:37:18 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Mon, Feb 21, 2011 at 11:08:05AM +0100, Dieter Plaetinck wrote:
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
There were several vulnerabilities with the automatic tarball extraction. Think of "tarballs bombs" (as in "ZIP bombs"). Think of what happens when a source tarball that contains a symlink to "/etc/passwd" is uploaded (and the web server isn't chrooted). Just to give two simple samples.
Hmm.. would it be that much work to make the AUR code/installation more secure, rather then just dropping the functionality? just asking...
Moreover, I've heard of some encoding issues with users just copy-pasting files from the AUR frontend.
this is kindof vague. "encoding issues"... issues at AUR side or client side? if the former, that would be a bug that could get fixed.
Generally, everyone should download and use the tarballs to build packages.
Yes, but I'm not talking about building packages, I'm talking about getting a quick idea of what the package contains and how it gets built/installed. for that, the "files" previous was very useful. Dieter
On Mon, Feb 21, 2011 at 11:55:59AM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 11:37:18 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Mon, Feb 21, 2011 at 11:08:05AM +0100, Dieter Plaetinck wrote:
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
There were several vulnerabilities with the automatic tarball extraction. Think of "tarballs bombs" (as in "ZIP bombs"). Think of what happens when a source tarball that contains a symlink to "/etc/passwd" is uploaded (and the web server isn't chrooted). Just to give two simple samples.
Hmm.. would it be that much work to make the AUR code/installation more secure, rather then just dropping the functionality? just asking...
We'd have to: * Ensure that there are no CGI handlers in the incoming package dir (that was already the case). * Maintain a patched branch of Archive::Tar that disables the extraction of symlinks (optionally: make upstream include such a feature in mainline). * Add some code that calculates the total size of extracted files before accepting it. * Do all that in a way that can't be used to DoS the server.
Moreover, I've heard of some encoding issues with users just copy-pasting files from the AUR frontend.
this is kindof vague. "encoding issues"... issues at AUR side or client side? if the former, that would be a bug that could get fixed.
I'm not sure. Probably both. It's obvious that if you copy and paste something from your browser, it won't be exactly the same as in the original tarball.
Generally, everyone should download and use the tarballs to build packages.
Yes, but I'm not talking about building packages, I'm talking about getting a quick idea of what the package contains and how it gets built/installed. for that, the "files" previous was very useful.
How often do you do that? Why don't you just download the tarball and check its contents? I also can't imagine a lot of cases where the PKGBUILD preview doesn't give you an idea of what a package does. If there really is need for such a thing, I'd also say this is something to do on the client side. AUR helpers might want to implement this. Or you can just check the cgit interface of the unofficial Git clone of the AUR [1]. [1] http://pkgbuild.com/git/aur.git/
On Mon, 21 Feb 2011 12:19:08 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
Moreover, I've heard of some encoding issues with users just copy-pasting files from the AUR frontend.
this is kindof vague. "encoding issues"... issues at AUR side or client side? if the former, that would be a bug that could get fixed.
I'm not sure. Probably both. It's obvious that if you copy and paste something from your browser, it won't be exactly the same as in the original tarball.
it's not obvious to me. Am I missing something? AFAIK, I should really get the same contents of text files pasted on my system (maybe encoded differently but that doesn't matter) provided all the characters shown can be decoded and encoded on my system. (and if that's not possible, then it's up to the user to configure his locales properly) Either way, like I said, the use case for showing files is more about previewing then aiding the building process.
Generally, everyone should download and use the tarballs to build packages.
Yes, but I'm not talking about building packages, I'm talking about getting a quick idea of what the package contains and how it gets built/installed. for that, the "files" previous was very useful.
How often do you do that? Why don't you just download the tarball and check its contents? I also can't imagine a lot of cases where the PKGBUILD preview doesn't give you an idea of what a package does.
If there really is need for such a thing, I'd also say this is something to do on the client side. AUR helpers might want to implement this. Or you can just check the cgit interface of the unofficial Git clone of the AUR [1].
Well, the problem is, if it would be all about the PKGBUILD alone, there would be no problem. But the mere fact that an aur contributor needs to upload source tarballs suggests there could be more stuff in there (install files, licence files, or even "dirtier" stuff), I could indeed look at the PKGBUILD but then I would need to inspect all the source code of the PKGBUILD which is much more mental work, which I try to avoid when I just want to get an idea of "what does this package contain" the reason I do this through the AUR webinterface is.. well, because there is a web interface. it's a bit cumbersome that I can do the package searching, looking at comments, looking at package info, ... in the webinterface, but not getting an idea of the contents of the source tarball. Maybe another point which is interesting to think about: you mentioned it would take several security precautions in AUR to prevent malicious source tarballs. By not doing this in AUR itself, doesn't that mean that every single AUR frontend should support this? If, as a user, I want to look at the source package, my aur client needs to fetch it, and extract it but it will need to do all those precautions you mentioned before. If AUR would take care of that, clients could be simpler. Dieter
On Mon, Feb 21, 2011 at 01:46:33PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 12:19:08 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
Moreover, I've heard of some encoding issues with users just copy-pasting files from the AUR frontend.
this is kindof vague. "encoding issues"... issues at AUR side or client side? if the former, that would be a bug that could get fixed.
I'm not sure. Probably both. It's obvious that if you copy and paste something from your browser, it won't be exactly the same as in the original tarball.
it's not obvious to me. Am I missing something? AFAIK, I should really get the same contents of text files pasted on my system (maybe encoded differently but that doesn't matter) provided all the characters shown can be decoded and encoded on my system. (and if that's not possible, then it's up to the user to configure his locales properly)
Yes, it's probably an encoding issue in most cases. And it's the rendering engine that doesn't always display stuff one-to-one. Best example are HTML files: If you copy the rendered page from some browser, you'll certainly not get the original HTML source again. That might be a bad example as, for the user, the differences are obvious in this case, but I'm sure that there are "better" ones around (I've heard of users reporting such things).
Generally, everyone should download and use the tarballs to build packages.
Yes, but I'm not talking about building packages, I'm talking about getting a quick idea of what the package contains and how it gets built/installed. for that, the "files" previous was very useful.
How often do you do that? Why don't you just download the tarball and check its contents? I also can't imagine a lot of cases where the PKGBUILD preview doesn't give you an idea of what a package does.
If there really is need for such a thing, I'd also say this is something to do on the client side. AUR helpers might want to implement this. Or you can just check the cgit interface of the unofficial Git clone of the AUR [1].
Well, the problem is, if it would be all about the PKGBUILD alone, there would be no problem. But the mere fact that an aur contributor needs to upload source tarballs suggests there could be more stuff in there (install files, licence files, or even "dirtier" stuff), I could indeed look at the PKGBUILD but then I would need to inspect all the source code of the PKGBUILD which is much more mental work, which I try to avoid when I just want to get an idea of "what does this package contain"
the reason I do this through the AUR webinterface is.. well, because there is a web interface. it's a bit cumbersome that I can do the package searching, looking at comments, looking at package info, ... in the webinterface, but not getting an idea of the contents of the source tarball.
Well, you still have a listing of all source files on the package details page - there are just no direct links to their contents. Can you give me one single example where you really need to have a quick look at the content of some source file to get an idea of what a package does? ".install" files only add users/groups, run some database upgrade scripts or display install messages in 95% of cases. License files will certainly never be needed to be inspected to make out the purpose of a package. The intention of patches can be guessed from their file names in most cases. If there really is need for a deep analysis, you can always download and extract the source tarball which should take about 5 seconds.
Maybe another point which is interesting to think about: you mentioned it would take several security precautions in AUR to prevent malicious source tarballs. By not doing this in AUR itself, doesn't that mean that every single AUR frontend should support this? If, as a user, I want to look at the source package, my aur client needs to fetch it, and extract it but it will need to do all those precautions you mentioned before. If AUR would take care of that, clients could be simpler.
Nope. Almost all of these things are harmful on the server side only. Your local system can't be DoS'ed by uploading malicious tarballs as there's no uploading to the clients. You decide which source tarballs to download. You can also abort the download/extraction process at any time. You won't bother about having a symlink to "/etc/passwd" somewhere in your directory tree. Giving read access to arbitrary files to everyone on a server is a much bigger issue. Trust me. XSS can only be done on the server. Executable scripts can't be executed by some remote attacker if they're just located somewhere inside your home directory. CGI/PHP scripts on the webserver are executable if the server isn't configured correctly. The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however. We also don't aim at moving functionality from the client side to the server side. It's the other way round. Keep the AUR simple, do complex and additional/optional operations on the client side.
On Mon, 21 Feb 2011 14:50:39 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however.
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance. Dieter
On Mon, Feb 21, 2011 at 03:46:47PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 14:50:39 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however.
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance.
I don't think this is possible without decompressing the tarball which is again vulnerable to (D)DoS.
On Mon, 21 Feb 2011 16:35:33 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Mon, Feb 21, 2011 at 03:46:47PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 14:50:39 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however.
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance.
I don't think this is possible without decompressing the tarball which is again vulnerable to (D)DoS.
hmm maybe we mean different things. you are talking about exhausting ram/cpu/time, right? http://en.wikipedia.org/wiki/Zip_bomb In that case, sure, just leave it to the client. the problem is trivial enough. I was talking about bad filenames (like ../../foo, /foo, /root/foobar, /tmpl/blah, and whatever else is posible) that might be prevented with `tar -t` Dieter
On Mon 21 Feb 2011 16:42 +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 16:35:33 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Mon, Feb 21, 2011 at 03:46:47PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 14:50:39 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however.
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance.
I don't think this is possible without decompressing the tarball which is again vulnerable to (D)DoS.
hmm maybe we mean different things. you are talking about exhausting ram/cpu/time, right? http://en.wikipedia.org/wiki/Zip_bomb In that case, sure, just leave it to the client. the problem is trivial enough.
I was talking about bad filenames (like ../../foo, /foo, /root/foobar, /tmpl/blah, and whatever else is posible) that might be prevented with `tar -t`
Yeah I was thinking we could be more strict about the source package format as well. For example rejecting any that have src or pkg directories. Those often contain upstream source code our actual builds by mistake. I think that's mostly from old packages where the uploader didn't use makepkg --source.
On Mon, Feb 21, 2011 at 04:42:49PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 16:35:33 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Mon, Feb 21, 2011 at 03:46:47PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 14:50:39 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however.
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance.
I don't think this is possible without decompressing the tarball which is again vulnerable to (D)DoS.
hmm maybe we mean different things. you are talking about exhausting ram/cpu/time, right?
Yes, like having two 1GB large files `tar -czf`'ed and uploading the resulting tarball to the AUR. I don't think that can be detected without being vulnerable to DoS attacks.
http://en.wikipedia.org/wiki/Zip_bomb In that case, sure, just leave it to the client. the problem is trivial enough.
I was talking about bad filenames (like ../../foo, /foo, /root/foobar, /tmpl/blah, and whatever else is posible) that might be prevented with `tar -t`
Dieter
On 02/21/11 10:54, Lukas Fleischer wrote:
Yes, like having two 1GB large files `tar -czf`'ed and uploading the resulting tarball to the AUR. I don't think that can be detected without being vulnerable to DoS attacks.
What if the PKGBUILD itself is a 1GB file? For example a normal looking PKGBUILD followed by a billion newlines. That probably compresses pretty well. (/foolishly responding without reading code) -Isaac
On 02/22/2011 12:35 AM, Isaac Dupree wrote:
On 02/21/11 10:54, Lukas Fleischer wrote:
Yes, like having two 1GB large files `tar -czf`'ed and uploading the resulting tarball to the AUR. I don't think that can be detected without being vulnerable to DoS attacks.
What if the PKGBUILD itself is a 1GB file? For example a normal looking PKGBUILD followed by a billion newlines. That probably compresses pretty well.
(/foolishly responding without reading code)
-Isaac
actually if i remember well somebody did that in the past. -- Ionuț
On Tue 22 Feb 2011 00:51 +0200, Ionuț Bîru wrote:
On 02/22/2011 12:35 AM, Isaac Dupree wrote:
On 02/21/11 10:54, Lukas Fleischer wrote:
Yes, like having two 1GB large files `tar -czf`'ed and uploading the resulting tarball to the AUR. I don't think that can be detected without being vulnerable to DoS attacks.
What if the PKGBUILD itself is a 1GB file? For example a normal looking PKGBUILD followed by a billion newlines. That probably compresses pretty well.
(/foolishly responding without reading code)
-Isaac
actually if i remember well somebody did that in the past.
Yeah we really need to figure out a reliable way to reject these zip-bombs.
On Mon, Feb 21, 2011 at 06:16:08PM -0500, Loui Chang wrote:
On Tue 22 Feb 2011 00:51 +0200, Ionuț Bîru wrote:
On 02/22/2011 12:35 AM, Isaac Dupree wrote:
On 02/21/11 10:54, Lukas Fleischer wrote:
Yes, like having two 1GB large files `tar -czf`'ed and uploading the resulting tarball to the AUR. I don't think that can be detected without being vulnerable to DoS attacks.
What if the PKGBUILD itself is a 1GB file? For example a normal looking PKGBUILD followed by a billion newlines. That probably compresses pretty well.
(/foolishly responding without reading code)
-Isaac
actually if i remember well somebody did that in the past.
Yeah we really need to figure out a reliable way to reject these zip-bombs.
Work in progress [1] :p [1] https://bugs.archlinux.org/task/22991
On Mon 21 Feb 2011 16:35 +0100, Lukas Fleischer wrote:
On Mon, Feb 21, 2011 at 03:46:47PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 14:50:39 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however.
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance.
I don't think this is possible without decompressing the tarball which is again vulnerable to (D)DoS.
It might be possible. There are xz -l and gunzip -l functions to preview the uncompressed size of archives without decompression.
On Mon, Feb 21, 2011 at 10:51:01AM -0500, Loui Chang wrote:
On Mon 21 Feb 2011 16:35 +0100, Lukas Fleischer wrote:
On Mon, Feb 21, 2011 at 03:46:47PM +0100, Dieter Plaetinck wrote:
On Mon, 21 Feb 2011 14:50:39 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The only issue that might affect the end users as well is "ZIP bombs". Most users will probably notice such a thing before it is entirely extracted, just interrupt tar(1)/gzip(1) and send a removal request to aur-general, however.
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance.
I don't think this is possible without decompressing the tarball which is again vulnerable to (D)DoS.
It might be possible. There are xz -l and gunzip -l functions to preview the uncompressed size of archives without decompression.
Hm, that doesn't sound too bad. We'd need to integrate this with Archive::Tar tho... It might be best to open a feature request on the AUR bug tracker at this point.
On Mon 21 Feb 2011 16:57 +0100, Lukas Fleischer wrote:
On Mon, Feb 21, 2011 at 10:51:01AM -0500, Loui Chang wrote:
On Mon 21 Feb 2011 16:35 +0100, Lukas Fleischer wrote:
On Mon, Feb 21, 2011 at 03:46:47PM +0100, Dieter Plaetinck wrote:
hmmm. some good points. I guess I could try the suggested approach and see how I like it. However, now that you bring up the "zip bombs", do you think it's feasible to scan for them serverside without compromising security and/or making things needlessly complicated? it would be useful for clients if that one aspect could be filtered out in advance.
I don't think this is possible without decompressing the tarball which is again vulnerable to (D)DoS.
It might be possible. There are xz -l and gunzip -l functions to preview the uncompressed size of archives without decompression.
Hm, that doesn't sound too bad. We'd need to integrate this with Archive::Tar tho... It might be best to open a feature request on the AUR bug tracker at this point.
On Mon, 21 Feb 2011 11:21:41 -0500 Loui Chang <louipc.ist@gmail.com> wrote:
yeay. always nice when a certain discussion or train of thought (pun intended, esp. for Loui) leads to an unforseen new improvement :) Dieter
R.Daneel approves of these ideas! Some needed patches to my scraper (where did LocationID go?) but overall I'm very happy to see these changes. There are many ways to safely inspect tarballs, even to get around the zip bomb. I won't claim this is perfect, but it works for Aur3 and me. Listing paths has already been mentioned. Dotfiles, dotdirs, src/, pkg/ are all simple red flags. For that manner, *any* directories are often a sign something is wrong. As mentioned, you can get the size of files before extracting. I don't know enough about tars to know if an attacker could lie about the size. But even if they can, time/memory quotas greatly limit the damage as DoS could acheive. Files can also be processed as streams. I originally did binary detection via "file" (which needs the contects to be extracted, it can't be streamed through stdin) but have since implemented a stream-based UTF8 detector. Stream processing gets around disk attacks. Make the stream processor interruptable (when time quota is exceded) and it can return an estimate. By the way, I am not suggesting binary detection. It is just an example of something that lends itself very well to this method. -Kyle http://kmkeen.com
On Mon, Feb 21, 2011 at 11:46:51AM -0500, keenerd wrote:
R.Daneel approves of these ideas! Some needed patches to my scraper (where did LocationID go?) but overall I'm very happy to see these changes.
The "LocationID" field is gone [1].
There are many ways to safely inspect tarballs, even to get around the zip bomb. I won't claim this is perfect, but it works for Aur3 and me.
Listing paths has already been mentioned. Dotfiles, dotdirs, src/, pkg/ are all simple red flags. For that manner, *any* directories are often a sign something is wrong. As mentioned, you can get the size of files before extracting. I don't know enough about tars to know if an attacker could lie about the size. But even if they can, time/memory quotas greatly limit the damage as DoS could acheive.
See [2] and [3].
Files can also be processed as streams. I originally did binary detection via "file" (which needs the contects to be extracted, it can't be streamed through stdin) but have since implemented a stream-based UTF8 detector. Stream processing gets around disk attacks. Make the stream processor interruptable (when time quota is exceded) and it can return an estimate. By the way, I am not suggesting binary detection. It is just an example of something that lends itself very well to this method.
That is already done with the "PKGBUILD" (which is still extracted). [1] http://mailman.archlinux.org/pipermail/aur-dev/2011-January/001387.html [2] https://bugs.archlinux.org/task/22991 [3] https://bugs.archlinux.org/task/22995
On Mon, 21 Feb 2011 12:19:08 +0100, Lukas Fleischer wrote:
How often do you do that? Why don't you just download the tarball and check its contents? I also can't imagine a lot of cases where the PKGBUILD preview doesn't give you an idea of what a package does.
At least all the cases where there's a .install script. That said I'm fine with leaving the verification job to AUR helpers. -- Pierre 'catwell' Chapuis
On 21 February 2011 18:08, Dieter Plaetinck <dieter@plaetinck.be> wrote:
On Mon, 21 Feb 2011 10:47:50 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1].
Please report any issues on the AUR bug tracker [2].
[1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
I've _always_ used this, almost on every package I came across. I don't want to be downloading anything I just want to take a rough look at. Would be good to have this back in some way or another. Brainstorm!
On Tue, Feb 22, 2011 at 02:03:38AM +0800, Ray Rashif wrote:
On 21 February 2011 18:08, Dieter Plaetinck <dieter@plaetinck.be> wrote:
On Mon, 21 Feb 2011 10:47:50 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1].
Please report any issues on the AUR bug tracker [2].
[1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
I've _always_ used this, almost on every package I came across. I don't want to be downloading anything I just want to take a rough look at. Would be good to have this back in some way or another. Brainstorm!
Did you read all my replies on this topic? If you still think that this should be implemented no matter what, you'd better open a feature request on the bug tracker.
On 22 February 2011 02:06, Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Tue, Feb 22, 2011 at 02:03:38AM +0800, Ray Rashif wrote:
On 21 February 2011 18:08, Dieter Plaetinck <dieter@plaetinck.be> wrote:
On Mon, 21 Feb 2011 10:47:50 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1].
Please report any issues on the AUR bug tracker [2].
[1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
I've _always_ used this, almost on every package I came across. I don't want to be downloading anything I just want to take a rough look at. Would be good to have this back in some way or another. Brainstorm!
Did you read all my replies on this topic? If you still think that this should be implemented no matter what, you'd better open a feature request on the bug tracker.
You do not really address this issue aside from shrugging it off as an unneeded feature that comes at the cost of one or two vulnerabilities. If it were really that useless, it would not have been implemented in the first place. The loopholes are real, but the feature should not be forgotten. I will leave it up to the community to file a request to have this back, because with that we can really see whether it actually is as useful as a few of us claim :)
On Tue, Feb 22, 2011 at 02:31:31AM +0800, Ray Rashif wrote:
On 22 February 2011 02:06, Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Tue, Feb 22, 2011 at 02:03:38AM +0800, Ray Rashif wrote:
On 21 February 2011 18:08, Dieter Plaetinck <dieter@plaetinck.be> wrote:
On Mon, 21 Feb 2011 10:47:50 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1].
Please report any issues on the AUR bug tracker [2].
[1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
I've _always_ used this, almost on every package I came across. I don't want to be downloading anything I just want to take a rough look at. Would be good to have this back in some way or another. Brainstorm!
Did you read all my replies on this topic? If you still think that this should be implemented no matter what, you'd better open a feature request on the bug tracker.
You do not really address this issue aside from shrugging it off as an unneeded feature that costs one or two vulnerabilities. If it was really that useless it would not have been implemented in the first place. The loopholes are real, but the feature should not be forgotten.
I consider it a nice-to-have feature. Can you give me a real reason why this is absolutely needed? Apart from "I often feel bored and click random links in the AUR"? I pointed out earlier that it is possible to get an overview of a package without having direct access to all source files. Downloading and extracting a source tarball (just in case you want to review it in detail) literally takes 3-5 seconds. Even less if you script it. I always did it like that, and I have reviewed a lot of packages, believe me. I also already mentioned that we try to move the more advanced features to the client side. KISS for the AUR.
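The download-and-review workflow really is scriptable in a few lines; in this sketch the tarball URL pattern is an assumption, so check the package page for the real download link:

```python
import io
import tarfile
import urllib.request

# Hypothetical URL pattern -- the real link is on the package's AUR page.
AUR_TARBALL = "http://aur.archlinux.org/packages/{name}/{name}.tar.gz"

def list_members(tar_bytes):
    """Return (name, size) for every member of an in-memory tarball."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:*") as tar:
        return [(m.name, m.size) for m in tar.getmembers()]

def review(name):
    """Fetch a package's source tarball and print its file listing."""
    with urllib.request.urlopen(AUR_TARBALL.format(name=name)) as resp:
        for member_name, size in list_members(resp.read()):
            print(member_name, size)
```

From there, piping the member list through a pager or diffing it against the previous release is a one-liner.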
I will leave it up to the community to file a request to have this back, because with that we can really see whether it actually is as useful as a few of us claim :)
Yeah. And the bug tracker is a more appropriate place for this discussion :)
On Tue 22 Feb 2011 02:31 +0800, Ray Rashif wrote:
On 22 February 2011 02:06, Lukas Fleischer <archlinux@cryptocrack.de> wrote:
On Tue, Feb 22, 2011 at 02:03:38AM +0800, Ray Rashif wrote:
On 21 February 2011 18:08, Dieter Plaetinck <dieter@plaetinck.be> wrote:
On Mon, 21 Feb 2011 10:47:50 +0100 Lukas Fleischer <archlinux@cryptocrack.de> wrote:
The official Arch Linux AUR setup has been upgraded to 1.8.0. For a short list of changes, read [1].
Please report any issues on the AUR bug tracker [2].
[1] http://mailman.archlinux.org/pipermail/aur-dev/2011-February/001433.html [2] https://bugs.archlinux.org/index.php?project=2
what's the reasoning behind no longer showing all files in the "source package"? I found this feature quite useful.
I've _always_ used this, almost on every package I came across. I don't want to be downloading anything I just want to take a rough look at. Would be good to have this back in some way or another. Brainstorm!
Did you read all my replies on this topic? If you still think that this should be implemented no matter what, you'd better open a feature request on the bug tracker.
You do not really address this issue aside from shrugging it off as an unneeded feature that costs one or two vulnerabilities. If it was really that useless it would not have been implemented in the first place. The loopholes are real, but the feature should not be forgotten.
I will leave it up to the community to file a request to have this back, because with that we can really see whether it actually is as useful as a few of us claim :)
As another AUR developer I will echo Lukas' statements. Actually, even if there were no security issues I would support this change one hundred percent. Functionality needs to be moved to the clients for the better future of the AUR. Let's not think of the AUR as a mere webapp. I understand that there will be some growing pains, some users may not really like or understand why certain changes are being made. I did anticipate this, and that's one reason why I asked that the PKGBUILD still be available for easy viewing. Think of it as a screenshot - it will give you a glimpse into the package but not the whole idea. You will need to download it for that.
On Mon, 21 Feb 2011 14:46:24 -0500, Loui Chang wrote:
Functionality needs to be moved to the clients for the better future of the AUR. Let's not think of the AUR as a mere webapp.
Couldn't you think of the webapp as just another client? In that case it could use JavaScript to uncompress the files on the user's workstation and display them. -- Pierre 'catwell' Chapuis
On Tue, Feb 22, 2011 at 11:11:28AM +0100, Pierre Chapuis wrote:
On Mon, 21 Feb 2011 14:46:24 -0500, Loui Chang wrote:
Functionality needs to be moved to the clients for the better future of the AUR. Let's not think of the AUR as a mere webapp.
Couldn't you think of the webapp as just another client? In which case it could use JavaScript to uncompress the files on the user's workstation and display them.
Oh my god, please don't. The AUR is considered to be lightweight and simple. Feel free to write yet another frontend, though.
participants (11)
-
Allan McRae
-
Cédric Girard
-
Dieter Plaetinck
-
Ionuț Bîru
-
Isaac Dupree
-
keenerd
-
Loui Chang
-
Lukas Fleischer
-
Pierre Chapuis
-
Ray Rashif
-
Seblu