On Tue, Sep 15, 2009 at 2:02 PM, Firmicus <Firmicus@gmx.net> wrote:
Aaron Griffin wrote:
On Tue, Sep 15, 2009 at 3:28 AM, Firmicus <Firmicus@gmx.net> wrote:
It happened to me in the past that I ran "extrapkg" by mistake on a package that had not been updated. It still ended up in ~/staging/extra on gerolde, and when I ran "/arch/db-update extra" it died in the middle of updating a dozen packages. I had to clean up the mess by hand.
This patch avoids this situation by checking, before starting the update process, whether any package in the staging dir is already present in the ftp repo, and if so, removing it from the staging dir.
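Roughly, the idea is something like this (just a sketch, with placeholder variables $stagedir and $ftppath for the staging dir and the ftp repo dir, not the actual patch):

    # Drop any staged package that is already present in the ftp repo,
    # so db-update never trips over an existing file half-way through.
    for f in "$stagedir"/*.pkg.tar.gz; do
        [ -e "$f" ] || continue
        pkg=$(basename "$f")
        if [ -e "$ftppath/$pkg" ]; then
            echo "skipping $pkg: already present in $ftppath"
            rm -f "$f"
        fi
    done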
It almost seems like we should use vercmp for something like this, rather than checking for existence.
I don't see the advantage this would have.
I guess it's not BAD if it copies an old package there (the cleanup script will kill it just fine), but it could be avoided.
But it won't: if the file exists, the script will die and everything else will be left in a half-done state. Consider for instance lines 217-219:
    if ! /bin/cp "$f" "$ftppath/"; then
        die "error: failure while copying files to $ftppath"
    fi
Of course, this does not prevent catching such errors at the commitpkg stage as well.
Oh no, I was thinking of the case where you had, say, version 1.3 sitting in staging and version 1.4 in the repos. Your version won't catch this, as it doesn't use vercmp. But repo-add will catch it and do nothing. I think the package will still get moved to the ftp, though.
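For what it's worth, a vercmp-based check could look roughly like this (a sketch with made-up variable names; vercmp prints a negative number, zero, or a positive number depending on how the two versions compare):

    stagedver=1.3   # version of the package sitting in staging (example value)
    repover=1.4     # version already in the repo (example value)
    if [ "$(vercmp "$stagedver" "$repover")" -gt 0 ]; then
        echo "staged package is newer; safe to copy"
    else
        echo "staged package is not newer; drop it from staging"
    fi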