[pacman-dev] [PATCH 1/4] makepkg: checkout a revision specified in SVN fragment in download_svn.
Previously the sources were downloaded at the HEAD revision in download_svn(). If a specific revision was requested in the fragment, the checkout was updated to that revision in extract_svn(). However, because SVN is a centralized system, this means that the changed sources have to be downloaded again. By moving the fragment handling to download_svn(), we get the correct revision without having to download it later in extract_svn().

Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com>
---
 scripts/makepkg.sh.in | 43 +++++++++++++++----------------------------
 1 file changed, 15 insertions(+), 28 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index 28e8e7a..aeb231a 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -706,10 +706,23 @@ download_svn() {
 	fi
 	url=${url%%#*}
 
+	local ref=HEAD
+	if [[ -n $fragment ]]; then
+		case ${fragment%%=*} in
+			revision)
+				ref="${fragment##*=}"
+				;;
+			*)
+				error "$(gettext "Unrecognized reference: %s")" "${fragment}"
+				plain "$(gettext "Aborting...")"
+				exit 1
+		esac
+	fi
+
 	if [[ ! -d "$dir" ]] || dir_is_empty "$dir" ; then
 		msg2 "$(gettext "Cloning %s %s repo...")" "${repo}" "svn"
 		mkdir -p "$dir/.makepkg"
-		if ! svn checkout --config-dir "$dir/.makepkg" "$url" "$dir"; then
+		if ! svn checkout -r ${ref} --config-dir "$dir/.makepkg" "$url" "$dir"; then
 			error "$(gettext "Failure while downloading %s %s repo")" "${repo}" "svn"
 			plain "$(gettext "Aborting...")"
 			exit 1
@@ -717,7 +730,7 @@ download_svn() {
 	elif (( ! HOLDVER )); then
 		msg2 "$(gettext "Updating %s %s repo...")" "${repo}" "svn"
 		cd_safe "$dir"
-		if ! svn update; then
+		if ! svn update -r ${ref}; then
 			# only warn on failure to allow offline builds
 			warning "$(gettext "Failure while updating %s %s repo")" "${repo}" "svn"
 		fi
@@ -727,11 +740,6 @@
 extract_svn() {
 	local netfile=$1
 
-	local fragment=${netfile#*#}
-	if [[ $fragment = "$netfile" ]]; then
-		unset fragment
-	fi
-
 	local dir=$(get_filepath "$netfile")
 	[[ -z "$dir" ]] && dir="$SRCDEST/$(get_filename "$netfile")"
@@ -742,29 +750,8 @@ extract_svn() {
 	pushd "$srcdir" &>/dev/null
 	rm -rf "${dir##*/}"
 
-	local ref
-	if [[ -n $fragment ]]; then
-		case ${fragment%%=*} in
-			revision)
-				ref="${fragment##*=}"
-				;;
-			*)
-				error "$(gettext "Unrecognized reference: %s")" "${fragment}"
-				plain "$(gettext "Aborting...")"
-				exit 1
-		esac
-	fi
-
 	cp -a "$dir" .
 
-	if [[ -n ${ref} ]]; then
-		cd_safe "$(get_filename "$netfile")"
-		if ! svn update -r ${ref}; then
-			error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "svn"
-			plain "$(gettext "Aborting...")"
-		fi
-	fi
-
 	popd &>/dev/null
 }
-- 
1.8.5.1
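For readers following the parameter expansions in this patch, the fragment handling that it moves into download_svn() can be sketched in isolation; the URL and revision number below are made-up examples:

```shell
# Stand-alone sketch of makepkg's URL-fragment parsing (example values only).
netfile='svn+https://example.com/repo/trunk#revision=1234'

fragment=${netfile#*#}                 # everything after the first '#'
[[ $fragment == "$netfile" ]] && unset fragment   # no '#' means no fragment

ref=HEAD                               # default when no revision is requested
if [[ -n $fragment ]]; then
	case ${fragment%%=*} in            # key part, before '='
		revision) ref=${fragment##*=} ;;   # value part, after '='
		*) echo "Unrecognized reference: $fragment" >&2; exit 1 ;;
	esac
fi
echo "$ref"    # prints: 1234
```

The same key=value split is reused for the git and hg fragments later in the series, with different accepted keys.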
This matches the behaviour with non-VCS sources. It also allows incremental builds when subversion is used to obtain the sources.

Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com>
---
 scripts/makepkg.sh.in | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index aeb231a..84183b0 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -747,12 +747,8 @@ extract_svn() {
 	repo=${repo%%#*}
 
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "svn"
-	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
-
-	cp -a "$dir" .
-
-	popd &>/dev/null
+	cp -au "$dir" "$srcdir"
 }
 
 download_sources() {
-- 
1.8.5.1
On Mon, Dec 09, 2013 at 09:31:21PM +0100, Lukáš Jirkovský wrote:
This matches the behaviour with non-VCS sources. It also allows incremental builds when subversion is used to obtain sources.
Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com>
---
 scripts/makepkg.sh.in | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index aeb231a..84183b0 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -747,12 +747,8 @@ extract_svn() {
 	repo=${repo%%#*}
 
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "svn"
-	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
-
-	cp -a "$dir" .
-
-	popd &>/dev/null
+	cp -au "$dir" "$srcdir"
What about deleted files? This will break builds...
 }
 
 download_sources() {
-- 
1.8.5.1
On Mon, Dec 9, 2013 at 9:45 PM, Dave Reisner <d@falconindy.com> wrote:
What about deleted files? This will break builds...
I'm aware of that. Usually the leftover files cause no problems, but I can think of some (crappy) build system that just compiles everything you throw at it, in which case renamed files would cause conflicts during compilation. I've tried a few packages and I've had no problems so far. For me, the possibility of doing incremental builds outweighs the possible problems. Fortunately I don't use many packages that still use SVN.

Lukas
On 10/12/13 18:48, Lukas Jirkovsky wrote:
On Mon, Dec 9, 2013 at 9:45 PM, Dave Reisner <d@falconindy.com> wrote:
What about deleted files? This will break builds...
I'm aware of that. Usually the left over files cause no problems, but I can think of some (crappy) build system that just compiles everything that you throw at it in which case renamed files would cause conflicts during compilation.
I've tried a few packages and I had no problems so far. For me the possibility to do incremental builds outweighs the possible problems. Fortunately I don't use many packages that still use SVN.
There will be plenty of build systems that will use *.c (made-up pseudo-make syntax) to compile all files. Let's weigh this up:

- incremental builds are a big gain
- it would seem more likely that files are added than removed
- we have the -C option to do a clean build if needed (a.k.a. current behaviour)

I'd suspect that anyone doing an incremental build that failed would be likely to try a clean build anyway. So, I think this is a net gain and should be pulled.

@Dave: does that make sense to you?

Allan
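Dave's deleted-file concern is reproducible with `cp -au` alone; here is a minimal sketch using throwaway directories (all names are made up):

```shell
# Why `cp -au` leaves stale files behind: it copies and updates, never deletes.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/repo" "$demo/src"      # stand-ins for the $SRCDEST checkout and $srcdir
touch "$demo/repo/a.c" "$demo/repo/b.c"

cp -au "$demo/repo" "$demo/src"        # first build: both files land in src/repo
rm "$demo/repo/b.c"                    # upstream later deletes b.c

cp -au "$demo/repo" "$demo/src"        # incremental update: b.c is NOT removed
ls "$demo/src/repo"                    # still lists both a.c and b.c
```

A wildcard-driven build system pointed at src/repo would still compile the stale b.c, which is exactly the failure mode being discussed; `makepkg -C` sidesteps it by starting clean.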
On Wed, Dec 11, 2013 at 6:08 AM, Allan McRae <allan@archlinux.org> wrote:
There will be plenty of build systems that will use *.c (made-up pseudo-make syntax) to compile all files.
I think I've a few ideas how to fix the problem with deleted files.

First idea:

1. checkout a new revision into a temporary directory
2. create a patch between the current checkout in $SRCDEST and the checkout from 1.
3. replace the checkout in $SRCDEST with the temporary checkout from 1.
4. apply the patch from 2. to the sources in $srcdir
5. update .svn of the checkout in $srcdir using the one from $SRCDEST

The temporary checkout could be avoided if the patch was created using "svn diff", but that would probably download the changes twice.

Second idea is to do the update the other way around for svn:

1. run "svn update" on the checkout in $srcdir
2. replace the sources in $SRCDEST with "svn export" from $srcdir
3. copy .svn from the checkout in $srcdir to the sources in $SRCDEST, effectively making it a valid checkout

The last idea: log deleted files from the "svn update" output and then delete them manually. However, I'm afraid of such an approach, because it would break if the update output changed.

Lukas
On 11/12/13 19:31, Lukas Jirkovsky wrote:
On Wed, Dec 11, 2013 at 6:08 AM, Allan McRae <allan@archlinux.org> wrote:
There will be plenty of build systems that will use *.c (made-up pseudo-make syntax) to compile all files.
I think I've a few ideas how to fix the problem with deleted files.
First idea is:
1. checkout a new revision into a temporary directory
2. create a patch between the current checkout in $SRCDEST and the checkout from 1.
3. replace the checkout in $SRCDEST with the temporary checkout from 1.
4. apply the patch from 2. to the sources in $srcdir
5. update .svn of the checkout in $srcdir using the one from $SRCDEST
The temporary checkout could be avoided if the patch was created using "svn diff", but that would probably download the changes twice.
Second idea is to do the update the other way around for svn:

1. run "svn update" on the checkout in $srcdir
2. replace the sources in $SRCDEST with "svn export" from $srcdir
3. copy .svn from the checkout in $srcdir to the sources in $SRCDEST, effectively making it a valid checkout
If we were going for something like these two... I'd say:

1) Copy the checkout in $SRCDEST to a temporary directory
2) Update the checkout in $SRCDEST
3) Take the diff and delete the temporary copy
4) Apply the diff to the copy in $srcdir

which still involves temporary copies, which could be quite large.
The last idea: Log deleted files from the "svn update" output and then delete them manually. However, I'm afraid of such approach, because it would break if the update output changed.
Definitely not! Parsing the output of external software is bad. In the end, I think we need to just accept that SVN is a centralised system and it is difficult to do this perfectly.

Allan
From: Allan McRae
Sent: Saturday, 14 December 2013 1:14 PM
To: Discussion list for pacman development
Reply-To: Discussion list for pacman development
Subject: Re: [pacman-dev] [PATCH 2/4] makepkg: svn: update existing sources in srcdir without removing them first.
On 10/12/13 06:31, Lukáš Jirkovský wrote:
This matches the behaviour with non-VCS sources. It also allows incremental builds when subversion is used to obtain sources.
Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com> ---
OK. There is not much we can do about files potentially not being removed if we jump between revisions here. We need a note added to the VCS section of the PKGBUILD man page saying that we try to do an incremental build, and to use -C for a non-distributed VCS (only SVN) if the file layout of the repo has changed.
 scripts/makepkg.sh.in | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index aeb231a..84183b0 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -747,12 +747,8 @@ extract_svn() {
 	repo=${repo%%#*}
 
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "svn"
-	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
-
-	cp -a "$dir" .
-
-	popd &>/dev/null
+	cp -au "$dir" "$srcdir"
 }
download_sources() {
The local changes are discarded when updating. This matches the behaviour when non-VCS sources are used. It also allows incremental builds.

Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com>
---
 scripts/makepkg.sh.in | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index 84183b0..1421bec 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -663,13 +663,12 @@ extract_hg() {
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "hg"
 	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
 
-	local ref
+	local ref=tip
 	if [[ -n $fragment ]]; then
 		case ${fragment%%=*} in
 			branch|revision|tag)
-				ref=('-u' "${fragment##*=}")
+				ref="${fragment##*=}"
 				;;
 			*)
 				error "$(gettext "Unrecognized reference: %s")" "${fragment}"
@@ -678,7 +677,14 @@
 		esac
 	fi
 
-	if ! hg clone "${ref[@]}" "$dir" "${dir##*/}"; then
+	if [[ -d "${dir##*/}" ]]; then
+		cd_safe "${dir##*/}"
+		if ! (hg pull && hg update -C -r "$ref"); then
+			error "$(gettext "Failure while updating working copy of %s %s repo")" "${repo}" "hg"
+			plain "$(gettext "Aborting...")"
+			exit 1
+		fi
+	elif ! hg clone -u "$ref" "$dir" "${dir##*/}"; then
 		error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "hg"
 		plain "$(gettext "Aborting...")"
 		exit 1
-- 
1.8.5.1
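These patches lean heavily on `${dir##*/}`; as a quick reference, here is what that expansion does (the path is a made-up example):

```shell
# `${dir##*/}` strips the longest prefix matching '*/', leaving the basename.
dir='/home/user/srcdest/myproject'
echo "${dir##*/}"    # prints: myproject  (equivalent to `basename "$dir"`)
echo "${dir%/*}"     # the complement: prints /home/user/srcdest
```

In the extract functions, $dir is the repository clone in $SRCDEST, so `${dir##*/}` names the matching working-copy directory inside $srcdir.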
On 10/12/13 06:31, Lukáš Jirkovský wrote:
The local changes are discarded when updating. This matches the behaviour when non-VCS sources are used. It also allows incremental builds.
Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com> ---
I can not find any evidence of hg pull having a -C flag.
 scripts/makepkg.sh.in | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index 84183b0..1421bec 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -663,13 +663,12 @@ extract_hg() {
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "hg"
 	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
 
-	local ref
+	local ref=tip
 	if [[ -n $fragment ]]; then
 		case ${fragment%%=*} in
 			branch|revision|tag)
-				ref=('-u' "${fragment##*=}")
+				ref="${fragment##*=}"
 				;;
 			*)
 				error "$(gettext "Unrecognized reference: %s")" "${fragment}"
@@ -678,7 +677,14 @@
 		esac
 	fi
 
-	if ! hg clone "${ref[@]}" "$dir" "${dir##*/}"; then
+	if [[ -d "${dir##*/}" ]]; then
+		cd_safe "${dir##*/}"
+		if ! (hg pull && hg update -C -r "$ref"); then
+			error "$(gettext "Failure while updating working copy of %s %s repo")" "${repo}" "hg"
+			plain "$(gettext "Aborting...")"
+			exit 1
+		fi
+	elif ! hg clone -u "$ref" "$dir" "${dir##*/}"; then
 		error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "hg"
 		plain "$(gettext "Aborting...")"
 		exit 1
On 09/11/14 18:19, Allan McRae wrote:
On 10/12/13 06:31, Lukáš Jirkovský wrote:
The local changes are discarded when updating. This matches the behaviour when non-VCS sources are used. It also allows incremental builds.
Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com> ---
I can not find any evidence of hg pull having a -C flag.
And of course, it is hg update using the -C. Patch looks good. Allan
The local changes are discarded when updating. This matches the behaviour when non-VCS sources are used. It also allows incremental builds.

Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com>
---
 scripts/makepkg.sh.in | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index 1421bec..99af551 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -581,9 +581,18 @@ extract_git() {
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "git"
 	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
 
-	if ! git clone "$dir"; then
+	local updating=false
+	if [[ -d "${dir##*/}" ]]; then
+		updating=true
+		cd_safe "${dir##*/}"
+		if ! git fetch; then
+			error "$(gettext "Failure while updating working copy of %s %s repo")" "${repo}" "git"
+			plain "$(gettext "Aborting...")"
+			exit 1
+		fi
+		cd_safe "$srcdir"
+	elif ! git clone "$dir"; then
 		error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "git"
 		plain "$(gettext "Aborting...")"
 		exit 1
@@ -591,7 +600,7 @@
 	cd_safe "${dir##*/}"
 
-	local ref
+	local ref=origin/HEAD
 	if [[ -n $fragment ]]; then
 		case ${fragment%%=*} in
 			commit|tag)
@@ -607,8 +616,8 @@
 		esac
 	fi
 
-	if [[ -n $ref ]]; then
-		if ! git checkout -b makepkg $ref; then
+	if [[ -n $ref ]] || ((updating)) ; then
+		if ! git checkout --force -B makepkg $ref; then
 			error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "git"
 			plain "$(gettext "Aborting...")"
 			exit 1
-- 
1.8.5.1
Sorry for top-posting in the previous message. See below. On Mon, Dec 9, 2013 at 3:31 PM, Lukáš Jirkovský <l.jirkovsky@gmail.com> wrote:
The local changes are discarded when updating. This matches the behaviour when non-VCS sources are used. It also allows incremental builds.
Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com>
---
 scripts/makepkg.sh.in | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index 1421bec..99af551 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -581,9 +581,18 @@ extract_git() {
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "git"
 	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
 
-	if ! git clone "$dir"; then
+	local updating=false
+	if [[ -d "${dir##*/}" ]]; then
+		updating=true
+		cd_safe "${dir##*/}"
+		if ! git fetch; then
+			error "$(gettext "Failure while updating working copy of %s %s repo")" "${repo}" "git"
+			plain "$(gettext "Aborting...")"
+			exit 1
+		fi
+		cd_safe "$srcdir"
+	elif ! git clone "$dir"; then
 		error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "git"
 		plain "$(gettext "Aborting...")"
 		exit 1
@@ -591,7 +600,7 @@
 	cd_safe "${dir##*/}"
 
-	local ref
+	local ref=origin/HEAD
 	if [[ -n $fragment ]]; then
 		case ${fragment%%=*} in
 			commit|tag)
@@ -607,8 +616,8 @@
 		esac
 	fi
 
-	if [[ -n $ref ]]; then
-		if ! git checkout -b makepkg $ref; then
+	if [[ -n $ref ]] || ((updating)) ; then
+		if ! git checkout --force -B makepkg $ref; then
This checkout checks out whatever has been cloned, but the git remote (origin by default) may be pointing to the wrong URL/source array entry, since you simply reuse the same .git/config (since you didn't rm -rf and git clone again). Therefore, this will break any update that changes repository URLs, and require the user or AUR wrapper to manually go in and delete the checked-out repository...

If saving yourself the step of having to "git clone" again is your only goal, here are two possible ways to solve that problem:

1) Use the $GIT_ALTERNATE_OBJECT_DIRECTORIES environment variable or the .git/objects/info/alternates file mechanism, and use an object store that is detached from the git clone, e.g. in some generic directory (e.g. $startdir/gitobjects), and don't delete that directory. That way, git will not re-download the objects (actual data) that it already fetched, only update the refs and fill in the missing objects in the object store you specify. See https://www.kernel.org/pub/software/scm/git/docs/gitrepository-layout.html for more info...

2) Make sure to set the git remote each time when updating, using the appropriate "git remote" command. This has the downside that you are replicating "git clone" functionality.

Your patches for Mercurial, SVN, etc. have a similar problem...
 			error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "git"
 			plain "$(gettext "Aborting...")"
 			exit 1
-- 
1.8.5.1
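Ido's point about the stale origin URL can be observed without any network access; a sketch with hypothetical local paths (assumes git is installed):

```shell
# A reused clone keeps whatever origin URL it was first cloned with.
set -e
work=$(mktemp -d)
git init -q "$work/old-upstream"                 # stands in for the old source URL
git clone -q "$work/old-upstream" "$work/build" 2>/dev/null
# If the PKGBUILD's source entry later changes, the build dir's .git/config
# is untouched, so subsequent fetches still hit the old location:
git -C "$work/build" config remote.origin.url    # prints .../old-upstream
```

This is why either resetting the remote on every update or detaching the object store, as suggested above, would be needed to make clone reuse fully robust against URL changes.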
Oh, also: On Mon, Dec 9, 2013 at 3:57 PM, Ido Rosen <ido@kernel.org> wrote:
If saving yourself the step of having to "git clone" again is your only goal, here are two possible ways to solve that problem:
1) Use the $GIT_ALTERNATE_OBJECT_DIRECTORIES environment variable or .git/objects/info/alternates file mechanism, and use an object store that is detached from the git clone, e.g. in some generic directory (e.g. $startdir/gitobjects), and don't delete that directory. That way, git will not re-download the objects (actual data) that it already fetched, only update the refs and fill in the missing objects in the object store you specify. See https://www.kernel.org/pub/software/scm/git/docs/gitrepository-layout.html for more info...)
2) Make sure to set the git remote each time when updating, using the appropriate "git remote" command. This has the downside that you are replicating "git clone" functionality.
3) Avoidance strategy: don't clone/fetch all the objects in the repository in the first place by doing a "shallow clone", setting --depth in git clone. https://www.kernel.org/pub/software/scm/git/docs/git-clone.html
Your patches for Mercurial, SVN, etc. have a similar problem...
On Mon, Dec 09, 2013 at 04:00:55PM -0500, Ido Rosen wrote:
Oh, also:
On Mon, Dec 9, 2013 at 3:57 PM, Ido Rosen <ido@kernel.org> wrote:
If saving yourself the step of having to "git clone" again is your only goal, here are two possible ways to solve that problem:
1) Use the $GIT_ALTERNATE_OBJECT_DIRECTORIES environment variable or .git/objects/info/alternates file mechanism, and use an object store that is detached from the git clone, e.g. in some generic directory (e.g. $startdir/gitobjects), and don't delete that directory. That way, git will not re-download the objects (actual data) that it already fetched, only update the refs and fill in the missing objects in the object store you specify. See https://www.kernel.org/pub/software/scm/git/docs/gitrepository-layout.html for more info...)
2) Make sure to set the git remote each time when updating, using the appropriate "git remote" command. This has the downside that you are replicating "git clone" functionality.
3) Avoidance strategy: don't clone/fetch all the objects in the repository in the first place by doing a "shallow clone", setting --depth in git clone. https://www.kernel.org/pub/software/scm/git/docs/git-clone.html
This groundbreaking idea has been proposed and rejected several times already.
Your patches for Mercurial, SVN, etc. have a similar problem...
On Mon, Dec 9, 2013 at 10:10 PM, Dave Reisner <d@falconindy.com> wrote:
This groundbreaking idea has been proposed and rejected several times already.
Can you point me to where they were rejected? I have submitted patches implementing this functionality in the past, but they were rejected because of technical problems, not because of the idea itself. This new patch series should fix all the issues that were present in my previous attempts. Also, I remember that Allan set FS#35050 as due in 4.2.0, probably because I was working on it.

Lukas
On Tue, Dec 10, 2013 at 09:39:51AM +0100, Lukas Jirkovsky wrote:
On Mon, Dec 9, 2013 at 10:10 PM, Dave Reisner <d@falconindy.com> wrote:
This groundbreaking idea has been proposed and rejected several times already.
Can you point me to where they were rejected? I have submitted the patches implementing this functionality in the past, but they were rejected because of technical problems, not because of the idea itself. This new patch series should fix all the issues that were present in my previous attempts. Also I remember that Allan set FS#35050 as due in 4.2.0 probably because I was working on it.
Doing shallow clones breaks way too much to be worth it in git:
https://mailman.archlinux.org/pipermail/aur-general/2013-April/022938.html

I'm just going to say that I am greatly opposed to using --depth=1 in the default source array. If you want --depth=1 you can use the prepare function, but I would still caution against that because it's quite limiting with what you can do.

-- 
William Giokas | KaiSforza | http://kaictl.net/
GnuPG Key: 0x73CD09CF
Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
On Tue, Dec 10, 2013 at 09:39:51AM +0100, Lukas Jirkovsky wrote:
On Mon, Dec 9, 2013 at 10:10 PM, Dave Reisner <d@falconindy.com> wrote:
This groundbreaking idea has been proposed and rejected several times already.
Can you point me to where they were rejected? I have submitted the patches implementing this functionality in the past, but they were rejected because of technical problems, not because of the idea itself. This new patch series should fix all the issues that were present in my previous attempts. Also I remember that Allan set FS#35050 as due in 4.2.0 probably because I was working on it.
For the tl;dr: here's how we use git sources at the moment:

- Create bare clone (no checked out files)
- Clone the bare clone to build dir
- Build
- Possible cleanup

With a shallow clone the first two steps won't work at all. Look at the manual page for git-clone(1):

    A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it)...

As you can see, we can't do the second step, and if we do the first step with --depth=1, then it would essentially just give us a useless set of files.

Thanks,
-- 
William Giokas | KaiSforza | http://kaictl.net/
GnuPG Key: 0x73CD09CF
Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
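The two-step flow William describes can be sketched locally (assumes git is installed; all paths are hypothetical stand-ins for $SRCDEST and $srcdir):

```shell
# makepkg's git flow in miniature: bare clone first, then clone to the build dir.
set -e
flow=$(mktemp -d)
git init -q "$flow/upstream"
git -C "$flow/upstream" -c user.name=u -c user.email=u@example.com \
    commit -q --allow-empty -m 'initial commit'

# Step 1: bare clone kept in $SRCDEST (no checked-out files)
git clone -q --mirror "$flow/upstream" "$flow/srcdest.git"
# Step 2: clone the bare clone into the build dir
git clone -q "$flow/srcdest.git" "$flow/build"

git -C "$flow/build" rev-list --count HEAD    # prints: 1
```

Because step 2 clones *from* the bare repository, a shallow $SRCDEST clone would break the scheme: git refuses to clone or fetch from a shallow repository, which is exactly the limitation quoted above.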
On 10/12/13 18:39, Lukas Jirkovsky wrote:
On Mon, Dec 9, 2013 at 10:10 PM, Dave Reisner <d@falconindy.com> wrote:
This groundbreaking idea has been proposed and rejected several times already.
Can you point me to where they were rejected? I have submitted the patches implementing this functionality in the past, but they were rejected because of technical problems, not because of the idea itself. This new patch series should fix all the issues that were present in my previous attempts. Also I remember that Allan set FS#35050 as due in 4.2.0 probably because I was working on it.
Note that Dave's reply was in regards to: On 10/12/13 07:00, Ido Rosen wrote:
3) Avoidance strategy: don't clone/fetch all the objects in the repository in the first place by doing a "shallow clone", setting --depth in git clone. https://www.kernel.org/pub/software/scm/git/docs/git-clone.html
Getting incremental builds for VCS is wanted.
On Sat, Dec 14, 2013 at 3:00 AM, Allan McRae <allan@archlinux.org> wrote:
Note that Dave's reply was in regards to:
On 10/12/13 07:00, Ido Rosen wrote:
3) Avoidance strategy: don't clone/fetch all the objects in the repository in the first place by doing a "shallow clone", setting --depth in git clone. https://www.kernel.org/pub/software/scm/git/docs/git-clone.html
Getting incremental builds for VCS is wanted.
Any news on this? Lukas
On 04/01/14 22:20, Lukas Jirkovsky wrote:
On Sat, Dec 14, 2013 at 3:00 AM, Allan McRae <allan@archlinux.org> wrote:
Note that Dave's reply was in regards to:
On 10/12/13 07:00, Ido Rosen wrote:
3) Avoidance strategy: don't clone/fetch all the objects in the repository in the first place by doing a "shallow clone", setting --depth in git clone. https://www.kernel.org/pub/software/scm/git/docs/git-clone.html
Getting incremental builds for VCS is wanted.
Any news on this?
The news is I took a look and decided I will need to review the patch when I have a suitable block of time to do so. This will definitely be before the next release.

Allan
On 6 January 2014 02:35, Allan McRae <allan@archlinux.org> wrote:
The news is I took a look and decided I will need to review the patch when I have a suitable block of time to do so. This will definitely be before the next release.
Allan
What is the status of these patches? I've been using them for git and mercurial PKGBUILDs since they were first posted and I haven't seen a single glitch yet.

Also, I wanted to share some of my opinions regarding the implementation of updates with bazaar. I found out that the current implementation doesn't use DVCS features effectively. I will illustrate it on an example of how extract_bzr() works now:

1. bzr checkout "$dir" -r "$rev" --lightweight

This does a lightweight checkout of $dir at revision $rev. "bzr revno" shows the most recent revision (i.e. the revision corresponding to the "upstream" dir).

2. bzr pull "$dir" -q --overwrite -r "$rev"

This basically synchronizes $dir and the checkout in $srcdir to the revision $rev. Now "bzr revno" correctly returns $rev. However, the problem is that this apparently throws away any newer revisions from the repository clone in $dir. This means if the user wanted to update from an older revision to a newer revision, they would have to download the new revisions from the Internet again.

I found a solution, but it would require updating existing packages to use "bzr revno --tree" instead of "bzr revno".

1. bzr checkout "$dir" --lightweight

There is no problem in this, but checking out a specified revision is kind of pointless now.

2. bzr update -q -r "$rev"

This updates the working tree to the specified revision. The big advantage of this approach is that it allows the user to move between revisions freely without needing to download everything from the Internet all over again. The problem is that "bzr revno" will still return the newest revision. That's because "bzr revno" returns the revision of the upstream and not the revision of the working directory. To obtain the revision of the working directory one has to use "bzr revno --tree".

3. bzr revert && bzr clean-tree -q --detritus --force

Needed only for the incremental updates. "bzr update" does an automatic merge with the changes in the working directory. If there were conflicts, it will mark them in the files in the same way as git etc., by adding lines such as >>>> etc. "bzr revert" is used to revert the conflicting files to the repository version. The automatic merge also sometimes leaves some backup files; these are cleaned up using the clean-tree command.

TL;DR: bazaar is weird and the current implementation of bzr in makepkg is kind of broken.

Lukas
On 18/06/14 18:49, Lukas Jirkovsky wrote:
On 6 January 2014 02:35, Allan McRae <allan@archlinux.org> wrote:
The news is I took a look and decided I will need to review the patch when I have a suitable block of time to do so. This will definitely be before the next release.
Allan
What is the status of these patches? I've been using them for git and mercurial PKGBUILDs since they were first posted and I haven't seen a single glitch yet.
They are in my queue (which stands at about 30 patches long...). They will be done before the next pacman release. Given my current work commitments and the upcoming glibc release, it may still take a while.
Also, I wanted to share some of my opinions regarding the implementation of updates with bazaar. I found out that the current implementation doesn't use DVCS features effectively. I will illustrate it on an example on how extract_bzr() works now:
1. bzr checkout "$dir" -r "$rev" --lightweight
this does a lightweight checkout of $dir at a revision rev. "bzr revno" shows the most recent revision (ie. the revision corresponding to the "upstream" dir).
2. bzr pull "$dir" -q --overwrite -r "$rev"
This basically synchronizes $dir and the checkout in $srcdir to the revision $rev. Now "bzr revno" correctly returns $rev. However, the problem is that this apparently throws away any newer revisions from the repository clone in $dir. This means if the user wanted to update from an older revision to a newer revision, they would have to download the new revisions from the Internet again.
I found a solution, but it would require updating existing packages to use "bzr revno --tree" instead of "bzr revno".
1. bzr checkout "$dir" --lightweight
there is no problem in this, but checking out a specified revision is kind of pointless now.
2. bzr update -q -r "$rev" &&
This updates the working tree to the specified revision. The big advantage of this approach is that it allows the user to move between revisions freely without needing to download everything from the Internet all over again.
The problem is that "bzr revno" will still return the newest revision. That's because "bzr revno" returns the revision of the upstream and not the revision of the working directory. To obtain the revision of the working directory one has to use "bzr revno --tree".
3. bzr revert && bzr clean-tree -q --detritus --force
Needed only for the incremental updates. "bzr update" does an automatic merge with the changes in the working directory. If there were conflicts, it will mark them in files in the same way as git etc. by adding lines such as >>>> etc. "bzr revert" is used to revert the conflicting files to the repository version. The automatic merge also sometimes leaves some backup files. These are cleaned up using the clean-tree command.
TLDR: bazaar is weird and the current implementation of bzr in makepkg is kind of broken.
Send a patch that you think is appropriate. I don't use bzr at all. Allan
On 19 June 2014 02:40, Allan McRae <allan@archlinux.org> wrote:
What is the status of these patches? I've been using them for git and mercurial PKGBUILDs since they were first posted and I haven't seen a single glitch yet.
They are in my queue (which stands at about 30 patches long...). They will be done before the next pacman release. Given my current work commitments and the upcoming glibc release, it may still take a while.
Thank you for the update. I completely understand that work comes first, and I hope everything goes well for you.
Send a patch that you think is appropriate. I don't use bzr at all.
Allan
Will do. I don't use bzr either, but I spent two days trying to figure out how to make it work with incremental updates; finding out about these problems was just a coincidence.

Lukas
On 10/12/13 06:31, Lukáš Jirkovský wrote:
The local changes are discarded when updating. This matches the behaviour when non-VCS sources are used. It also allows incremental builds.
Signed-off-by: Lukáš Jirkovský <l.jirkovsky@gmail.com>
---
 scripts/makepkg.sh.in | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/scripts/makepkg.sh.in b/scripts/makepkg.sh.in
index 1421bec..99af551 100644
--- a/scripts/makepkg.sh.in
+++ b/scripts/makepkg.sh.in
@@ -581,9 +581,18 @@ extract_git() {
 	msg2 "$(gettext "Creating working copy of %s %s repo...")" "${repo}" "git"
 	pushd "$srcdir" &>/dev/null
-	rm -rf "${dir##*/}"
 
-	if ! git clone "$dir"; then
+	local updating=false
See below.
+ if [[ -d "${dir##*/}" ]]; then + updating=true + cd_safe "${dir##*/}" + if ! git fetch; then + error "$(gettext "Failure while updating working copy of %s %s repo")" "${repo}" "git" + plain "$(gettext "Aborting...")" + exit 1 + fi + cd_safe "$srcdir" + elif ! git clone "$dir"; then error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "git" plain "$(gettext "Aborting...")" exit 1 @@ -591,7 +600,7 @@ extract_git() {
cd_safe "${dir##*/}"
- local ref + local ref=origin/HEAD if [[ -n $fragment ]]; then case ${fragment%%=*} in commit|tag) @@ -607,8 +616,8 @@ extract_git() { esac fi
- if [[ -n $ref ]]; then - if ! git checkout -b makepkg $ref; then + if [[ -n $ref ]] || ((updating)) ; then
This always updates given $ref is set to origin/HEAD above. This should be if [[ $ref != "origin/HEAD" ]] || (( updating )); then and further (( updating )) always is false... there is no such thing as doing updating=true in bash. Use 0/1 instead.
+ if ! git checkout --force -B makepkg $ref; then error "$(gettext "Failure while creating working copy of %s %s repo")" "${repo}" "git" plain "$(gettext "Aborting...")" exit 1
I have pulled the patch to my "vcs" branch, which will be pulled when all these are reviewed. One down three to go! A
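Allan's point about 0/1 flags can be shown in isolation: inside (( )), a bare word is looked up as a variable name, so the string "true" resolves to the unset variable $true, which evaluates to 0.

```shell
#!/bin/bash
# Inside arithmetic evaluation, "true" is treated as a variable name.
# It is unset, so it evaluates to 0 and the test is false.
updating=true
(( updating )) && echo "truthy" || echo "falsy"   # prints "falsy"

# With a numeric flag the test behaves as intended.
updating=1
(( updating )) && echo "truthy" || echo "falsy"   # prints "truthy"
```

This is why the review asks for updating=0/1 rather than the true/false strings familiar from other languages.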
On 09/11/14 15:14, Allan McRae wrote:
On 10/12/13 06:31, Lukáš Jirkovský wrote:
[...]
+		if ! git checkout --force -B makepkg $ref; then
In addition I have added --no-track here. Given we can now switch between branches or to HEAD, removing tracking prevents weird output.
On 9 November 2014 06:14, Allan McRae <allan@archlinux.org> wrote:
I have pulled the patch to my "vcs" branch, which will be pulled when all these are reviewed.
One down three to go!
A
I see that you already made the changes, so there is no action required from me, am I right? Anyway, thank you for looking into these patches. The new version of pacman is going to be awesome :-) Lukas
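The -b to -B change discussed earlier matters for incremental builds: "git checkout -b" fails if the branch already exists, while "-B" creates it or resets it to the given start point. A throwaway demonstration (the repository, identity, and messages are made up for the example):

```shell
#!/bin/bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'

git checkout -q -b makepkg        # first build: creating the branch works
git checkout -q -                 # back to the default branch

git checkout -q -b makepkg 2>/dev/null \
    && echo "-b succeeded" || echo "-b failed"   # fails: branch already exists
git checkout -q -B makepkg \
    && echo "-B succeeded"                       # resets the existing branch
```

With "-b", a second run of extract_git() over an existing working copy would error out; "-B" (plus --force) makes the checkout idempotent.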
On 10/12/13 06:31, Lukáš Jirkovský wrote:
Previously the sources were downloaded at the HEAD revision in download_svn(). If a specific revision was requested in the fragment, the working copy was updated to that revision in extract_svn(). However, because SVN is a centralized system, this means the changed sources have to be downloaded again.
By moving the fragment handling to download_svn(), we get the correct revision without having to download it later in extract_svn().
This patch keeps with our style of download and extract being separate steps, which allows offline building. Pulled to my VCS branch.
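The fragment handling that the patch moves into download_svn() boils down to plain parameter expansion. A standalone sketch (the example URL is made up; the expansions match those quoted in the patch):

```shell
#!/bin/bash
# Parse a makepkg-style VCS source with an optional #revision=N fragment.
netfile='svn+https://example.org/repo/trunk#revision=1234'   # hypothetical source

fragment=${netfile#*#}                 # text after the first '#', if any
[[ $fragment == "$netfile" ]] && unset fragment   # no '#' -> no fragment
url=${netfile%%#*}                     # URL with the fragment stripped

ref=HEAD                               # default: checkout/update at HEAD
if [[ -n $fragment ]]; then
    case ${fragment%%=*} in            # part before '=' names the reference type
        revision) ref=${fragment##*=} ;;
        *) echo "Unrecognized reference: $fragment" >&2; exit 1 ;;
    esac
fi

echo "svn checkout -r $ref $url"
# prints: svn checkout -r 1234 svn+https://example.org/repo/trunk
```

Doing this once in download_svn() means both "svn checkout -r" and "svn update -r" fetch the requested revision directly, instead of downloading HEAD and re-downloading the older revision in extract_svn().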
participants (7)
- Allan McRae
- Bryce Gibson
- Dave Reisner
- Ido Rosen
- Lukas Jirkovsky
- Lukáš Jirkovský
- William Giokas