[arch-dev-public] [PATCH 4/4] ftpdir-cleanup: lock repo before doing the cleanup
This should fix the problem of the ftpdir-cleanup script removing the new package instead of the old one (FS#17058). The script makes $LOCK_TRIAL attempts at getting the repo lock, each separated by $LOCK_DELAY seconds. If the lock is unsuccessful, the cleanup is skipped for that particular repo.

Signed-off-by: Eric Bélanger <snowmaniscool@gmail.com>

BTW, I haven't really tested this except the while loop logic.
---
 misc-scripts/ftpdir-cleanup |   28 +++++++++++++++++++++++++---
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/misc-scripts/ftpdir-cleanup b/misc-scripts/ftpdir-cleanup
index a185090..77f79e6 100755
--- a/misc-scripts/ftpdir-cleanup
+++ b/misc-scripts/ftpdir-cleanup
@@ -10,15 +10,35 @@ dest=$2

 ############################################################

-. "$(dirname $0)/../db-functions"
+. "$(dirname $0)/../db-functions"
 . "$(dirname $0)/../config"

 ${CLEANUP_DRYRUN} && echo 'dry run mode is active'

 ftppath_base="$FTP_BASE/$reponame/$FTP_OS_SUFFIX"

+LOCK_DELAY=10
+LOCK_TRIAL=6
+
 for arch in ${ARCHES[@]}; do
+	IS_LOCKED=0
+	count=0
+	while [ $count -le $LOCK_TRIAL ]; do
+		if repo_lock $reponame $arch ; then
+			IS_LOCKED=1
+			let count=$LOCK_TRIAL+1
+			continue
+		fi
+		sleep $LOCK_DELAY
+		let count=$count+1
+	done
+
+	if [ $IS_LOCKED -eq 0 ]; then
+		echo "Failed to lock $reponame $arch repo"
+		exit 1
+	fi
+
 	TMPDIR=$(mktemp -d /tmp/cleanup-XXXXXX) || exit 1
 	ftppath="$ftppath_base/$arch"
 	MISSINGFILES=""
@@ -80,10 +100,10 @@ for arch in ${ARCHES[@]}; do
 			dbpkgname=$(grep -A1 '^%FILENAME%$' "${p}/desc" 2>/dev/null| tail -n1)
 			if [ "${dbpkgname}" = "${pkgname}" ]; then
 				continue 2
-			fi
+			fi
 		done
 		EXTRAFILES="$EXTRAFILES $pkg"
-	done
+	done

 	rm -rf ${TMPDIR}
@@ -100,6 +120,8 @@ for arch in ${ARCHES[@]}; do
 		fi
 	done

+	repo_unlock $reponame $arch
+
 	#Make sure we've done *something* before outputting anything
 	if [ -z "$DELETEFILES$DELETESYMLINKS$MISSINGFILES$EXTRAFILES" ]; then
 		continue
--
1.7.0
On 25/02/10 11:36, Eric Bélanger wrote:
This should fix the problem of the ftpdir-cleanup script removing the new package instead of the old one (FS#17058). The script makes $LOCK_TRIAL attempts at getting the repo lock, each separated by $LOCK_DELAY seconds. If the lock is unsuccessful, the cleanup is skipped for that particular repo.
Signed-off-by: Eric Bélanger <snowmaniscool@gmail.com>
BTW, I haven't really tested this except the while loop logic.
Somewhat of an aside... How long does the repo cleanup script generally take to run?

Allan
On 25.02.2010 02:36, Eric Bélanger wrote:
This should fix the problem of the ftpdir-cleanup script removing the new package instead of the old one (FS#17058). The script makes $LOCK_TRIAL attempts at getting the repo lock, each separated by $LOCK_DELAY seconds. If the lock is unsuccessful, the cleanup is skipped for that particular repo.
Signed-off-by: Eric Bélanger <snowmaniscool@gmail.com>
BTW, I haven't really tested this except the while loop logic.
---
 misc-scripts/ftpdir-cleanup |   28 +++++++++++++++++++++++++---
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/misc-scripts/ftpdir-cleanup b/misc-scripts/ftpdir-cleanup
index a185090..77f79e6 100755
--- a/misc-scripts/ftpdir-cleanup
+++ b/misc-scripts/ftpdir-cleanup
@@ -10,15 +10,35 @@ dest=$2

 ############################################################

-. "$(dirname $0)/../db-functions"
+. "$(dirname $0)/../db-functions"
 . "$(dirname $0)/../config"

 ${CLEANUP_DRYRUN} && echo 'dry run mode is active'

 ftppath_base="$FTP_BASE/$reponame/$FTP_OS_SUFFIX"

+LOCK_DELAY=10
+LOCK_TRIAL=6
+
 for arch in ${ARCHES[@]}; do
+	IS_LOCKED=0
+	count=0
+	while [ $count -le $LOCK_TRIAL ]; do
+		if repo_lock $reponame $arch ; then
+			IS_LOCKED=1
+			let count=$LOCK_TRIAL+1
+			continue
+		fi
+		sleep $LOCK_DELAY
+		let count=$count+1
+	done
+
+	if [ $IS_LOCKED -eq 0 ]; then
+		echo "Failed to lock $reponame $arch repo"
+		exit 1
+	fi
+
 	TMPDIR=$(mktemp -d /tmp/cleanup-XXXXXX) || exit 1
 	ftppath="$ftppath_base/$arch"
 	MISSINGFILES=""
This should be made into a generic function instead of just being in cleanup. Also, this seems like a duplication of kernel/glibc functionality - the flock(1) command should probably be used in repo_lock instead of the manual locking process we use.
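For illustration only, here is a rough sketch of what a flock(1)-based repo_lock/repo_unlock pair could look like; the lock file location, the file descriptor number and the default timeout are assumptions for the example, not the existing db-functions API:

	# Sketch only -- not the current db-functions implementation.
	# flock(1) ties the lock to an open file descriptor, so the kernel
	# releases it automatically when the process exits, even on a crash.
	repo_lock() {
		local repo=$1 arch=$2 timeout=${3:-300}   # timeout in seconds (assumed default)
		local lockfile="${LOCK_DIR:-/tmp}/${repo}-${arch}.lock"
		exec 9>"$lockfile" || return 1            # keep fd 9 open on the lock file
		flock -x -w "$timeout" 9 || {             # -x exclusive, -w wait up to $timeout seconds
			echo "Failed to lock ${repo} ${arch}" >&2
			return 1
		}
	}

	repo_unlock() {
		exec 9>&-                                 # closing the descriptor drops the lock
	}

Since the lock follows the descriptor, the explicit unlock is only needed to release it before the script finishes.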
On Thu, Feb 25, 2010 at 1:46 AM, Thomas Bächler <thomas@archlinux.org> wrote:
On 25.02.2010 02:36, Eric Bélanger wrote:
This should fix the problem of the ftpdir-cleanup script removing the new package instead of the old one (FS#17058). The script makes $LOCK_TRIAL attempts at getting the repo lock, each separated by $LOCK_DELAY seconds. If the lock is unsuccessful, the cleanup is skipped for that particular repo.
Signed-off-by: Eric Bélanger <snowmaniscool@gmail.com>
BTW, I haven't really tested this except the while loop logic.
---
 misc-scripts/ftpdir-cleanup |   28 +++++++++++++++++++++++++---
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/misc-scripts/ftpdir-cleanup b/misc-scripts/ftpdir-cleanup
index a185090..77f79e6 100755
--- a/misc-scripts/ftpdir-cleanup
+++ b/misc-scripts/ftpdir-cleanup
@@ -10,15 +10,35 @@ dest=$2

 ############################################################

-. "$(dirname $0)/../db-functions"
+. "$(dirname $0)/../db-functions"
 . "$(dirname $0)/../config"

 ${CLEANUP_DRYRUN} && echo 'dry run mode is active'

 ftppath_base="$FTP_BASE/$reponame/$FTP_OS_SUFFIX"

+LOCK_DELAY=10
+LOCK_TRIAL=6
+
 for arch in ${ARCHES[@]}; do
+	IS_LOCKED=0
+	count=0
+	while [ $count -le $LOCK_TRIAL ]; do
+		if repo_lock $reponame $arch ; then
+			IS_LOCKED=1
+			let count=$LOCK_TRIAL+1
+			continue
+		fi
+		sleep $LOCK_DELAY
+		let count=$count+1
+	done
+
+	if [ $IS_LOCKED -eq 0 ]; then
+		echo "Failed to lock $reponame $arch repo"
+		exit 1
+	fi
+
 	TMPDIR=$(mktemp -d /tmp/cleanup-XXXXXX) || exit 1
 	ftppath="$ftppath_base/$arch"
 	MISSINGFILES=""
This should be made into a generic function instead of just being in cleanup. Also, this seems like a duplication of kernel/glibc functionality - the flock(1) command should probably be used in repo_lock instead of the manual locking process we use.
+1 on flock usage across the board.

-Dan
On Thu, Feb 25, 2010 at 11:59 AM, Dan McGee <dpmcgee@gmail.com> wrote:
On Thu, Feb 25, 2010 at 1:46 AM, Thomas Bächler <thomas@archlinux.org> wrote:
On 25.02.2010 02:36, Eric Bélanger wrote:
This should fix the problem of the ftpdir-cleanup script removing the new package instead of the old one (FS#17058). The script makes $LOCK_TRIAL attempts at getting the repo lock, each separated by $LOCK_DELAY seconds. If the lock is unsuccessful, the cleanup is skipped for that particular repo.
Signed-off-by: Eric Bélanger <snowmaniscool@gmail.com>
BTW, I haven't really tested this except the while loop logic.
---
 misc-scripts/ftpdir-cleanup |   28 +++++++++++++++++++++++++---
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/misc-scripts/ftpdir-cleanup b/misc-scripts/ftpdir-cleanup
index a185090..77f79e6 100755
--- a/misc-scripts/ftpdir-cleanup
+++ b/misc-scripts/ftpdir-cleanup
@@ -10,15 +10,35 @@ dest=$2

 ############################################################

-. "$(dirname $0)/../db-functions"
+. "$(dirname $0)/../db-functions"
 . "$(dirname $0)/../config"

 ${CLEANUP_DRYRUN} && echo 'dry run mode is active'

 ftppath_base="$FTP_BASE/$reponame/$FTP_OS_SUFFIX"

+LOCK_DELAY=10
+LOCK_TRIAL=6
+
 for arch in ${ARCHES[@]}; do
+	IS_LOCKED=0
+	count=0
+	while [ $count -le $LOCK_TRIAL ]; do
+		if repo_lock $reponame $arch ; then
+			IS_LOCKED=1
+			let count=$LOCK_TRIAL+1
+			continue
+		fi
+		sleep $LOCK_DELAY
+		let count=$count+1
+	done
+
+	if [ $IS_LOCKED -eq 0 ]; then
+		echo "Failed to lock $reponame $arch repo"
+		exit 1
+	fi
+
 	TMPDIR=$(mktemp -d /tmp/cleanup-XXXXXX) || exit 1
 	ftppath="$ftppath_base/$arch"
 	MISSINGFILES=""
This should be made into a generic function instead of just being in cleanup. Also, this seems like a duplication of kernel/glibc functionality - the flock(1) command should probably be used in repo_lock instead of the manual locking process we use.
+1 on flock usage across the board.
-Dan
Yeah, I'll look into using flock in our repo_lock function instead of doing this while loop. BTW, should the timeout be an argument of repo_lock, i.e. each script could set its own value? Or should we use a common value (60 seconds?) defined in config?
On 25.02.2010 19:44, Eric Bélanger wrote:
Yeah, I'll look into using flock in our repo_lock function instead of doing this while loop. BTW, should the timeout be an argument of repo_lock, i.e. each script could set its own value? Or should we use a common value (60 seconds?) defined in config?
I like the option. The scripts we call manually could have an infinite timeout (we can abort them manually), while the cronjobs could use very long (5 minutes) static timeouts.
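As a sketch of that split, assuming repo_lock grows an optional timeout argument (which the current function does not take), the call sites could look like:

	# Hypothetical calls only -- the timeout argument is an assumption.
	# Manually run scripts: no timeout, wait until the lock is free (abort with Ctrl-C).
	repo_lock "$reponame" "$arch" || exit 1

	# Cronjobs: wait at most 5 minutes, then give up and retry on the next run.
	repo_lock "$reponame" "$arch" 300 || exit 1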
On Thu, Feb 25, 2010 at 2:16 PM, Thomas Bächler <thomas@archlinux.org> wrote:
On 25.02.2010 19:44, Eric Bélanger wrote:
Yeah, I'll look into using flock in our repo_lock function instead of doing this while loop. BTW, should the timeout be an argument of repo_lock, i.e. each script could set its own value? Or should we use a common value (60 seconds?) defined in config?
I like the option. The scripts we call manually could have an infinite timeout (we can abort them manually), while the cronjobs could use very long (5 minutes) static timeouts.
After reading the flock man page, Thomas and I realized that we can't use flock in our current repo lock/unlock functions. We would need to either get rid of the locking functions and explicitly use flock wherever it is needed, or do some significant code refactoring. Maybe someone with experience using flock could shed some light on this. So we probably want to use flock eventually, but that's for the long term. Meanwhile, I just sent a new patch for the current repo locking functions. It makes the locking atomic and adds an optional timeout.
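For context, the usual way to get an atomic lock in plain shell without flock is mkdir, since creating a directory either succeeds for exactly one caller or fails. A rough sketch of that pattern with an optional timeout follows; the function shape, lock path and polling interval are assumptions for illustration, not the patch that was actually sent:

	# Sketch of an atomic, timeout-aware lock -- not the actual patch.
	repo_lock() {
		local repo=$1 arch=$2 timeout=$3          # timeout is optional, in seconds
		local lockdir="${LOCK_DIR:-/tmp}/${repo}-${arch}.lockdir"
		local waited=0
		# mkdir either creates the directory atomically or fails if it already exists
		until mkdir "$lockdir" 2>/dev/null; do
			if [ -n "$timeout" ] && [ "$waited" -ge "$timeout" ]; then
				echo "Failed to lock ${repo} ${arch} after ${timeout}s" >&2
				return 1
			fi
			sleep 5
			waited=$((waited + 5))
		done
	}

	repo_unlock() {
		rmdir "${LOCK_DIR:-/tmp}/${1}-${2}.lockdir"
	}

With no timeout argument the loop waits indefinitely, which matches the behaviour suggested for manually run scripts.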
participants (4):
- Allan McRae
- Dan McGee
- Eric Bélanger
- Thomas Bächler