[aur-general] Fwd: please add --depth 1 to makepkg git clone
A rage post with too many typos and not enough punctuation is hard to read.
This is dumb because using cp is not enough; you should be using git clone, because it is git and straight from git. If your goal is to just use the newest, you are doing it wrong; go write your own PKGBUILD.
What is not enough? cp has an option to preserve everything.
The only reason to use git packages is if you are developing upstream and want to actively test development of upstream packages... Or if upstream is dumb enough to never tag stable releases. Fortunately there are very few of the latter, so to support the majority of users we clone the whole thing.
I truly do not understand why this conversation exists. We discussed this months ago. The conclusion was that you really shouldn't be using these packages unless you are following upstream... Yes, I agree with you. But as a person who commits patches and needs to test, I think using --depth 1 makes initial cloning faster and decreases the load on the remote git server. Think about this: 100 people cloning vlc.git shallow (around 600 MB) vs. without shallow (around 10,000 MB)... It's not just about whether you care about it or not; please preserve the resources of other projects.
Please read the documentation for depth=1 and shallow clones. When we create a package, we only need the snapshot at that time; we rarely revert any commit. After we do a shallow clone, we can still pull and remake the package. I really don't understand your reason of "to support the majority of users we clone the whole thing", because a shallow clone is sufficient. On Fri, Apr 5, 2013 at 10:50 PM, Daniel Wallace <danielwallace@gtmanfred.com> wrote:
Tai-Lin Chu <tailinchu@gmail.com> wrote:
1. You can still pull, so that is not a reason to go against it.
This is also true. makepkg clones a bare repository into the SRCDEST directory. If this is a shallow bare repo, then clones cannot be made from it, and cloning from it is exactly what makepkg does.
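To make that two-step flow concrete, here is a rough approximation of what makepkg does for a git source (a sketch based on the behaviour described in this thread; the exact flags makepkg uses may differ, and the URL is a placeholder):

# download step: keep a bare copy of the upstream repo under SRCDEST
git clone --mirror git://example.org/project.git "$SRCDEST/project"
# on later builds, just update that bare copy
git --git-dir="$SRCDEST/project" fetch --all
# extract step: clone a working copy from the local bare repo into the build dir
git clone "$SRCDEST/project" "$srcdir/project"

If the bare copy in SRCDEST were shallow, the git versions discussed here would refuse that second clone, which is the objection being raised.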
2. ... I don't see why it needs to use git clone; using cp is good enough...
On Fri, Apr 5, 2013 at 7:05 PM, William Giokas <1007380@gmail.com> wrote:
On Fri, Apr 05, 2013 at 06:47:55PM -0700, Tai-Lin Chu wrote:
makepkg only needs the latest git snapshot. There are only 2 cases that won't work: 1. reverting a git commit, 2. counting the number of revisions.
Won't work, as you can't clone from shallow repositories, which is what makepkg needs to do to create the working copy for the build.
This is also true. makepkg clones a bare repository into the SRCDEST directory. If this is a shallow bare repo, then clones cannot be made from it, and cloning from it is exactly what makepkg does.
Thanks, -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
I truly do not understand why this conversation exists. We discussed this months ago. The conclusion was that you really shouldn't be using these packages unless you are following upstream... What if the current depth=1 is unstable and has a huge bug... Then what... Oh, those users aren't following upstream and have no idea, because instead of the developers of the program monitoring and telling them this is a stable (and hopefully bug-free) release... they use your depth=1 PKGBUILD and never update until one of the depends does a soname bump and breaks everything... To save space... In my opinion we should actually remove git packages from the AUR... It adds an air of instability to the distribution... People that use -git should really maintain their own PKGBUILD... And yes, I felt this was so important that I interrupted my night out to send this...
So, unlike you, I hope depth is never supported in package source=(). -- Sent from my Android Phone. Daniel Wallace, Arch Linux Trusted User (GTManfred)
On 6 April 2013 15:25, Tai-Lin Chu <tailinchu@gmail.com> wrote:
Yes, I agree with you. But as a person who commits patches and needs to test, I think using --depth 1 makes initial cloning faster and decreases the load on the remote git server. Think about this: 100 people cloning vlc.git shallow (around 600 MB) vs. without shallow (around 10,000 MB)... It's not just about whether you care about it or not; please preserve the resources of other projects.
I personally like small checkouts. If I am testing software, I don't really need much of its history, and I don't need to be able to commit anything. If I'm developing software, I'll have a separate directory with full checkouts anyway. VCS differences apply, though.
There is really no pragmatic difference between copying with cp and exporting (Subversion) or cloning (Git) a VCS repo, except when you don't know what you're doing. If you make changes, a cp may not copy what you intend to copy, or vice-versa with export/clone.
IMO, keeping checkouts lean and mean for building experimental packages is a good idea. VCS repos take a lot of space, and in the event you want to maintain package repos with them, you'd like the extra space saved.
However, we need equivalent methods for every VCS we care to support ('depth' doesn't mean the same thing in svn, for instance), and we need to provide a mechanism to choose to keep depths (so that you may choose to reuse repos for your own use with full history and what not).
-- GPG/PGP ID: C0711BF1
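To illustrate that last point about 'depth' (a generic sketch with placeholder URLs, not makepkg code): in git, --depth truncates history, while in svn, --depth controls how much of the directory tree is checked out, and a history-free snapshot is closer to svn export.

# git: fetch only the most recent commit's history
git clone --depth 1 git://example.org/project.git project-git
# svn: --depth limits the checked-out tree, not the history
svn checkout --depth immediates svn://example.org/project/trunk project-svn
# an svn snapshot without working-copy metadata at all
svn export svn://example.org/project/trunk project-export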
Did any of you read my email? Getting a shallow clone is simply not possible, unless you want to continue to write this boilerplate code. These large transactions are a one-time action, and I see no harm. Thank you, William Giokas On Apr 6, 2013 8:47 AM, "Rashif Ray Rahman" <schiv@archlinux.org> wrote:
On 6 April 2013 15:25, Tai-Lin Chu <tailinchu@gmail.com> wrote:
Yes, I agree with you. But as a person who commits patches and needs to test, I think using --depth 1 makes initial cloning faster and decreases the load on the remote git server. Think about this: 100 people cloning vlc.git shallow (around 600 MB) vs. without shallow (around 10,000 MB)... It's not just about whether you care about it or not; please preserve the resources of other projects.
I personally like small checkouts. If I am testing software, I don't really need much of its history, and I don't need to be able to commit anything. If I'm developing software, I'll have a separate directory with full checkouts anyway. VCS differences apply, though.
There is really no pragmatic difference between copying with cp and exporting (Subversion) or cloning (Git) a VCS repo, except when you don't know what you're doing. If you make changes, a cp may not copy what you intend to copy, or vice-versa with export/clone.
IMO, keeping checkouts lean and mean for building experimental packages is a good idea. VCS repos take a lot of space, and in the event you want to maintain package repos with them, you'd like the extra space saved.
However, we need equivalent methods for every VCS we care to support ('depth' doesn't mean the same thing in svn, for instance), and we need to provide a mechanism to choose to keep depths (so that you may choose to reuse repos for your own use with full history and what not).
-- GPG/PGP ID: C0711BF1
On Sat, Apr 06, 2013 at 12:25:37AM -0700, Tai-Lin Chu wrote:
This is dumb because using cp is not enough; you should be using git clone, because it is git and straight from git. If your goal is to just use the newest, you are doing it wrong; go write your own PKGBUILD.
What is not enough? cp has an option to preserve everything.
Doesn't matter. cp does nothing with checksums, whereas git will preserve every byte, and it literally can't go bad (or, in the extremely unlikely case that it does, it will simply stop the build). Maybe rsync, you say? That still isn't cryptographically secure. Using git, you can guarantee that the files you are building from are exactly the same as anyone else's, which is what we want with makepkg.
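As a generic illustration of that checksum point (not something makepkg itself runs): every git object is addressed by its hash, so corruption in a clone can be detected after the fact, whereas a plain cp or rsync copy has no such self-check. The path below is a placeholder.

# verify every object in an existing clone against its checksum
cd /path/to/clone && git fsck --full
# a damaged object makes this (and any later clone or checkout of it) fail loudly,
# while a bit flipped in a cp'd tree goes unnoticed until something breaks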
The only reason to use git packages is if you are developing upstream and want to actively test development of upstream packages... Or if upstream is dumb enough to never tag stable releases. Fortunately there are very few of the latter, so to support the majority of users we clone the whole thing.
Please read the documentation for depth=1 and shallow clones. When we create a package, we only need the snapshot at that time; we rarely revert any commit. After we do a shallow clone, we can still pull and remake the package. I really don't understand your reason of "to support the majority of users we clone the whole thing", because a shallow clone is sufficient.
There's minimal point to this. As I've said numerous times, it does not allow you to clone the shallow bare repo, which is what makepkg gets when it fetches git sources.
I truly do not understand why this conversation exists. We discussed this months ago. The conclusion was that you really shouldn't be using these packages unless you are following upstream... Yes, I agree with you. But as a person who commits patches and needs to test, I think using --depth 1 makes initial cloning faster and decreases the load on the remote git server.
Someone submitting patches and testing should simply link to their development repo in their SRCDEST, seeing as they already have the repo.
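A minimal sketch of that suggestion, assuming a hypothetical existing clone at ~/code/vlc and a repo directory name that matches what makepkg expects for the source entry (both are assumptions; check your PKGBUILD's source=() before copying this):

# let makepkg reuse an existing development clone instead of fetching its own
export SRCDEST=$HOME/pkgsrc
mkdir -p "$SRCDEST"
ln -s "$HOME/code/vlc" "$SRCDEST/vlc"

The idea is that makepkg then only updates the repository you already have rather than downloading the whole history again.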
Think about this: 100 people cloning vlc.git shallow (around 600 MB) vs. without shallow (around 10,000 MB)... It's not just about whether you care about it or not; please preserve the resources of other projects.
If they're all doing it at the same time, cloning fresh repositories, then yes, that may be an issue on some large projects with very terrible servers. Also, if you're worried about server load, mirror the repository yourself so people can take the load off of the host server. This is the joy of a DVCS. Once again, you can continue to use the worthless boilerplate code from the old VCS PKGBUILDs, but this is pretty much worthless to be honest, and will only fly in the face of readability. Thank you, -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
Doesn't matter. cp does nothing with checksums, whereas git will preserve every byte, and it literally can't go bad (or, in the extremely unlikely case that it does, it will simply stop the build). Maybe rsync, you say? That still isn't cryptographically secure. Using git, you can guarantee that the files you are building from are exactly the same as anyone else's, which is what we want with makepkg.
cp and git clone are exactly the same; see the cp source code. And if a file is corrupted, then you have even bigger problems; in general that is very unlikely (I mean, if this happens, either 1. the kernel has a problem, or 2. your disk has gone bad). Stack Overflow confirmed the result: http://stackoverflow.com/questions/852561/is-it-safe-to-use-a-copied-git-rep...
There's minimal point to this. As I've said numerous times, it does not allow you to clone the shallow bare repo, which is what makepkg gets when it fetches git sources.
aren't we talking about cp....?
If they're all doing it at the same time, cloning fresh repositories, then yes, that may be an issue on some large projects with very terrible servers. Also, if you're worried about server load, mirror the repository yourself so people can take the load off of the host server. This is the joy of a DVCS.
I don't have a server, and this is not practical. Certainly, using a git PKGBUILD with a shallow clone is far easier than what you mentioned. On Sat, Apr 6, 2013 at 10:08 AM, William Giokas <1007380@gmail.com> wrote:
On Sat, Apr 06, 2013 at 12:25:37AM -0700, Tai-Lin Chu wrote:
This is dumb because using cp is not enough; you should be using git clone, because it is git and straight from git. If your goal is to just use the newest, you are doing it wrong; go write your own PKGBUILD.
What is not enough? cp has an option to preserve everything.
Doesn't matter. cp does nothing with checksums, whereas git will preserve every byte, and it literally can't go bad (or, in the extremely unlikely case that it does, it will simply stop the build). Maybe rsync, you say? That still isn't cryptographically secure. Using git, you can guarantee that the files you are building from are exactly the same as anyone else's, which is what we want with makepkg.
The only reason to use git packages is if you are developing upstream and want to actively test development of upstream packages... Or if upstream is dumb enough to never tag stable releases. Fortunately there are very few of the latter, so to support the majority of users we clone the whole thing.
Please read the documentation for depth=1 and shallow clones. When we create a package, we only need the snapshot at that time; we rarely revert any commit. After we do a shallow clone, we can still pull and remake the package. I really don't understand your reason of "to support the majority of users we clone the whole thing", because a shallow clone is sufficient.
There's minimal point to this. As I've said numerous times, it does not allow you to clone the shallow bare repo, which is what makepkg gets when it fetches git sources.
I truly do not understand why this conversation exists. We discussed this months ago. The conclusion was that you really shouldn't be using these packages unless you are following upstream... Yes, I agree with you. But as a person who commits patches and needs to test, I think using --depth 1 makes initial cloning faster and decreases the load on the remote git server.
Someone submitting patches and testing should simply link to their development repo in their SRCDEST, seeing as they already have the repo.
Think about this: 100 people cloning vlc.git shallow (around 600 MB) vs. without shallow (around 10,000 MB)... It's not just about whether you care about it or not; please preserve the resources of other projects.
If they're all doing it at the same time, cloning fresh repositories, then yes, that may be an issue on some large projects with very terrible servers. Also, if you're worried about server load, mirror the repository yourself so people can take the load off of the host server. This is the joy of a DVCS.
Once again, you can continue to use the worthless boilerplate code from the old VCS PKGBUILDs, but this is pretty much worthless to be honest, and will only fly in the face of readability.
Thank you, -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
On Sat, Apr 06, 2013 at 11:10:52AM -0700, Tai-Lin Chu wrote:
Doesn't matter. cp does nothing with checksums, whereas git will preserve every byte, and it literally can't go bad (or, in the extremely unlikely case that it does, it will simply stop the build). Maybe rsync, you say? That still isn't cryptographically secure. Using git, you can guarantee that the files you are building from are exactly the same as anyone else's, which is what we want with makepkg.
cp and git clone are exactly the same; see the cp source code. And if a file is corrupted, then you have even bigger problems; in general that is very unlikely (I mean, if this happens, either 1. the kernel has a problem, or 2. your disk has gone bad). Stack Overflow confirmed the result: http://stackoverflow.com/questions/852561/is-it-safe-to-use-a-copied-git-rep...
There's minimal point to this. As I've said numerous times, it does not allow you to clone the shallow bare repo, which is what makepkg gets when it fetches git sources.
aren't we talking about cp....?
Here, run this quick script and see what you can do with it:

#!/bin/bash
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose --bare git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"

If you look at the one generated by the 'cp' command you will see that it is totally missing the actual files, and only contains (duh) the bare repository files. This is utterly worthless for building, and also, if there is disk failure, makepkg will still try to build. Looking into the one generated by the git clone, you'll see that it has all of the correct files and can actually be built.
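To see the failure mode being argued about, here is a small extension of that script (hypothetical; the exact error text depends on the git version, and git releases newer than this thread have relaxed the restriction):

echo "==> Cloning into a *shallow* bare repository..."
git clone --verbose --bare --depth 1 git://github.com/falconindy/cower.git shallowbare
echo "==> Trying to clone a working copy from it, as makepkg would..."
git clone --verbose /tmp/dumb/shallowbare shallowcopy || echo "==> clone refused: the source repository is shallow"

With the git versions current at the time, that second clone is refused, which is exactly why makepkg cannot simply pass --depth 1 when it fetches.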
If they're all doing it at the same time, cloning fresh repositories, then yes, that may be an issue on some large projects with very terrible servers. Also, if you're worried about server load, mirror the repository yourself so people can take the load off of the host server. This is the joy of a DVCS.
I don't have a server, and this is not practical. Certainly, using a git PKGBUILD with a shallow clone is far easier than what you mentioned.
Not at all. See the script above. !next -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
...what are you trying to test?

mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"

Test this. Of course --bare will give you a different result... OK, here is the normal process that I mentioned:

git clone --depth 1 git://github.com/falconindy/cower.git test
cp -a -r test test2
# build with test2
rm -rf test2

On Sat, Apr 6, 2013 at 11:25 AM, William Giokas <1007380@gmail.com> wrote:
On Sat, Apr 06, 2013 at 11:10:52AM -0700, Tai-Lin Chu wrote:
Doesn't matter. cp does nothing with checksums, whereas git will preserve every byte, and it literally can't go bad (or, in the extremely unlikely case that it does, it will simply stop the build). Maybe rsync, you say? That still isn't cryptographically secure. Using git, you can guarantee that the files you are building from are exactly the same as anyone else's, which is what we want with makepkg.
cp and git clone are exactly the same; see the cp source code. And if a file is corrupted, then you have even bigger problems; in general that is very unlikely (I mean, if this happens, either 1. the kernel has a problem, or 2. your disk has gone bad). Stack Overflow confirmed the result: http://stackoverflow.com/questions/852561/is-it-safe-to-use-a-copied-git-rep...
There's minimal point to this. As I've said numerous times, it does not allow you to clone the shallow bare repo, which is what makepkg gets when it fetches git sources.
aren't we talking about cp....?
Here, run this quick script and see what you can do with it:
#!/bin/bash
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose --bare git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"
If you look at the one generated by the 'cp' command you will see that it is totally missing the actual files, and only contains (duh) the bare repository files. This is utterly worthless for building, and also, if there is disk failure, makepkg will still try to build.
Looking into the one generated by the git clone, you'll see that it has all of the correct files and can actually be built.
If they're all doing it at the same time, cloning fresh repositories, then yes, that may be an issue on some large projects with very terrible servers. Also, if you're worried about server load, mirror the repository yourself so people can take the load off of the host server. This is the joy of a DVCS.
I don't have a server, and this is not practical. Certainly, using a git PKGBUILD with a shallow clone is far easier than what you mentioned.
Not at all. See the script above.
!next -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
On Sat, Apr 6, 2013 at 3:11 PM, Tai-Lin Chu <tailinchu@gmail.com> wrote:
...what are you trying to test?
Probably trying to replicate what makepkg does to show you why cp doesn't work.
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"
Test this. Of course --bare will give you a different result...
And this is what makepkg uses so that the base repo takes up less space on disk and the tree doesn't need to be calculated.
OK, here is the normal process that I mentioned:
git clone --depth 1 git://github.com/falconindy/cower.git test
cp -a -r test test2
# build with test2
rm -rf test2
Please go back and search the pacman-dev list for why we aren't doing this -- it's clear that you posted the suggestion here before doing any amount of investigation into this. You aren't the first to suggest this, and you unfortunately won't be the last.
On Sat, Apr 6, 2013 at 11:25 AM, William Giokas <1007380@gmail.com> wrote:
On Sat, Apr 06, 2013 at 11:10:52AM -0700, Tai-Lin Chu wrote:
Doesn't matter. cp does nothing with checksums, whereas git will preserve every byte, and it literally can't go bad (or, in the extremely unlikely case that it does, it will simply stop the build). Maybe rsync, you say? That still isn't cryptographically secure. Using git, you can guarantee that the files you are building from are exactly the same as anyone else's, which is what we want with makepkg.
cp and git clone are exactly the same; see the cp source code. And if a file is corrupted, then you have even bigger problems; in general that is very unlikely (I mean, if this happens, either 1. the kernel has a problem, or 2. your disk has gone bad). Stack Overflow confirmed the result:
http://stackoverflow.com/questions/852561/is-it-safe-to-use-a-copied-git-rep...
There's minimal point to this. As I've said numerous times, it does not allow you to clone the shallow bare repo, which is what makepkg gets when it fetches git sources.
aren't we talking about cp....?
Here, run this quick script and see what you can do with it:
#!/bin/bash
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose --bare git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"
If you look at the one generated by the 'cp' command you will see that it is totally missing the actual files, and only contains (duh) the bare repository files. This is utterly worthless for building, and also, if there is disk failure, makepkg will still try to build.
Looking into the one generated by the git clone, you'll see that it has all of the correct files and can actually be built.
If they're all doing it at the same time, cloning fresh repositories, then yes, that may be an issue on some large projects with very terrible servers. Also, if you're worried about server load, mirror the repository yourself so people can take the load off of the host server. This is the joy of a DVCS.
I don't have a server, and this is not practical. Certainly, using a git PKGBUILD with a shallow clone is far easier than what you mentioned.
Not at all. See the script above.
!next -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
@dave I still cannot find any info regarding why we should not use depth 1. Do you mind pasting the link here? Thanks. On Sat, Apr 6, 2013 at 12:15 PM, Dave Reisner <d@falconindy.com> wrote:
On Sat, Apr 6, 2013 at 3:11 PM, Tai-Lin Chu <tailinchu@gmail.com> wrote:
...what are you trying to test?
Probably trying to replicate what makepkg does to show you why cp doesn't work.
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"
Test this. Of course --bare will give you a different result...
And this is what makepkg uses so that the base repo takes up less space on disk and the tree doesn't need to be calculated.
OK, here is the normal process that I mentioned:
git clone --depth 1 git://github.com/falconindy/cower.git test
cp -a -r test test2
# build with test2
rm -rf test2
Please go back and search the pacman-dev list for why we aren't doing this -- it's clear that you posted the suggestion here before doing any amount of investigation into this. You aren't the first to suggest this, and you unfortunately won't be the last.
On Sat, Apr 6, 2013 at 11:25 AM, William Giokas <1007380@gmail.com> wrote:
On Sat, Apr 06, 2013 at 11:10:52AM -0700, Tai-Lin Chu wrote:
Doesn't matter. cp does nothing with checksums, whereas git will preserve every byte, and it literally can't go bad (or, in the extremely unlikely case that it does, it will simply stop the build). Maybe rsync, you say? That still isn't cryptographically secure. Using git, you can guarantee that the files you are building from are exactly the same as anyone else's, which is what we want with makepkg.
cp and git clone are exactly the same; see the cp source code. And if a file is corrupted, then you have even bigger problems; in general that is very unlikely (I mean, if this happens, either 1. the kernel has a problem, or 2. your disk has gone bad). Stack Overflow confirmed the result:
http://stackoverflow.com/questions/852561/is-it-safe-to-use-a-copied-git-rep...
There's minimal point to this. As I've said numerous times, it does not allow you to clone the shallow bare repo, which is what makepkg gets when it fetches git sources.
aren't we talking about cp....?
Here, run this quick script and see what you can do with it:
#!/bin/bash
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose --bare git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"
If you look at the one generated by the 'cp' command you will see that it is totally missing the actual files, and only contains (duh) the bare repository files. This is utterly worthless for building, and also, if there is disk failure, makepkg will still try to build.
Looking into the one generated by the git clone, you'll see that it has all of the correct files and can actually be built.
If they're all doing it at the same time, cloning fresh repositories, then yes, that may be an issue on some large projects with very terrible servers. Also, if you're worried about server load, mirror the repository yourself so people can take the load off of the host server. This is the joy of a DVCS.
I don't have a server, and this is not practical. Certainly, using a git PKGBUILD with a shallow clone is far easier than what you mentioned.
Not at all. See the script above.
!next -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
On Sat, Apr 06, 2013 at 12:26:14PM -0700, Tai-Lin Chu wrote:
@dave I still cannot find any info regarding why we should not use depth 1. Do you mind pasting the link here? Thanks.
https://mailman.archlinux.org/pipermail/pacman-dev/2012-March.txt Now curl and grep are your friends...
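For instance, one way to dig through that archive (an illustrative one-liner, not a command from the thread):

curl -s https://mailman.archlinux.org/pipermail/pacman-dev/2012-March.txt | grep -n -i -C 2 'depth'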
On Sat, Apr 6, 2013 at 12:15 PM, Dave Reisner <d@falconindy.com> wrote:
On Sat, Apr 6, 2013 at 3:11 PM, Tai-Lin Chu <tailinchu@gmail.com> wrote:
...what are you trying to test?
Probably trying to replicate what makepkg does to show you why cp doesn't work.
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"
Test this. Of course --bare will give you a different result...
And this is what makepkg uses so that the base repo takes up less space on disk and the tree doesn't need to be calculated.
OK, here is the normal process that I mentioned:
git clone --depth 1 git://github.com/falconindy/cower.git test
cp -a -r test test2
# build with test2
rm -rf test2
Please go back and search the pacman-dev list for why we aren't doing this -- it's clear that you posted the suggestion here before doing any amount of investigation into this. You aren't the first to suggest this, and you unfortunately won't be the last.
-- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
Thanks, William. But I don't think that discussion was as deep as what we have right now. Apparently Allan miscalculated how much we can save by using a shallow clone. I tried linux/master yesterday; it is more like 600 MB down to 97 MB. On Sat, Apr 6, 2013 at 4:46 PM, William Giokas <1007380@gmail.com> wrote:
On Sat, Apr 06, 2013 at 12:26:14PM -0700, Tai-Lin Chu wrote:
@dave I still cannot find any info regarding why we should not use depth 1. Do you mind pasting the link here? Thanks.
https://mailman.archlinux.org/pipermail/pacman-dev/2012-March.txt
Now curl and grep are your friends...
On Sat, Apr 6, 2013 at 12:15 PM, Dave Reisner <d@falconindy.com> wrote:
On Sat, Apr 6, 2013 at 3:11 PM, Tai-Lin Chu <tailinchu@gmail.com> wrote:
...what are you trying to test?
Probably trying to replicate what makepkg does to show you why cp doesn't work.
mkdir -p /tmp/dumb/
pushd /tmp/dumb/
echo "==> Cloning into a bare repository..."
git clone --verbose git://github.com/falconindy/cower.git barerepo
echo "==> Creating copy of this repo using cp..."
cp -r -a /tmp/dumb/barerepo /tmp/dumb/barecp
echo "==> Done"
echo "==> Creating copy of this repo using git clone..."
git clone --verbose /tmp/dumb/barerepo barerepocopy
echo "==> Done"
Test this. Of course --bare will give you a different result...
And this is what makepkg uses so that the base repo takes up less space on disk and the tree doesn't need to be calculated.
OK, here is the normal process that I mentioned:
git clone --depth 1 git://github.com/falconindy/cower.git test
cp -a -r test test2
# build with test2
rm -rf test2
Please go back and search the pacman-dev list for why we aren't doing this -- it's clear that you posted the suggestion here before doing any amount of investigation into this. You aren't the first to suggest this, and you unfortunately won't be the last.
-- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
On Sat, Apr 06, 2013 at 05:40:40PM -0700, Tai-Lin Chu wrote:
Thanks, William. But I don't think that discussion was as deep as what we have right now. Apparently Allan miscalculated how much we can save by using a shallow clone. I tried linux/master yesterday; it is more like 600 MB down to 97 MB.
Okay, if you're going to be doing shallow clones, you may as well just get the dang tarballs. This totally flies in the face of what the -git packages really are: development packages. Once you download it, you never have to do so again; you just update it. If you've got a problem with the size, then somehow get a physical copy of it, or just take the time to fetch a revision, then keep it updated gradually. Thanks, -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
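For what it's worth, a generic sketch of the "just get the tarball" route for a project hosted on GitHub (the URL pattern is an illustration, not something makepkg generates):

# fetch a snapshot of the current master branch, with no history at all
curl -L -o cower-master.tar.gz https://github.com/falconindy/cower/archive/master.tar.gz
tar xf cower-master.tar.gz

As the reply below notes, though, not every upstream offers such snapshots.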
Okay, if you're going to be doing shallow clones, you may as well just get the dang tarballs. This totally flies in the face of what the -git packages really are: development packages. Once you download it, you never have to do so again; you just update it. If you've got a problem with the size, then somehow get a physical copy of it, or just take the time to fetch a revision, then keep it updated gradually.
No. Because a snapshot tarball is not always available, we use git clone. On Sat, Apr 6, 2013 at 5:57 PM, William Giokas <1007380@gmail.com> wrote:
On Sat, Apr 06, 2013 at 05:40:40PM -0700, Tai-Lin Chu wrote:
Thanks, William. But I don't think that discussion was as deep as what we have right now. Apparently Allan miscalculated how much we can save by using a shallow clone. I tried linux/master yesterday; it is more like 600 MB down to 97 MB.
Okay, if you're going to be doing shallow clones, you may as well just get the dang tarballs. This totally flies in the face of what the -git packages really are: development packages. Once you download it, you never have to do so again; you just update it. If you've got a problem with the size, then somehow get a physical copy of it, or just take the time to fetch a revision, then keep it updated gradually.
Thanks, -- William Giokas | KaiSforza GnuPG Key: 0x73CD09CF Fingerprint: F73F 50EF BBE2 9846 8306 E6B8 6902 06D8 73CD 09CF
On 7 April 2013 09:19, Tai-Lin Chu <tailinchu@gmail.com> wrote:
Okay, if you're going to be doing shallow clones, you may as well just get the dang tarballs. This totally flies in the face of what the -git packages really are: development packages. Once you download it, you never have to do so again; you just update it. If you've got a problem with the size, then somehow get a physical copy of it, or just take the time to fetch a revision, then keep it updated gradually.
No. Because a snapshot tarball is not always available, we use git clone.
Before this goes on indefinitely, know that aur-general is not the right medium for this sort of argument. Better discussion based on technical merit can be had through the pacman-dev mailing list or our bugtracker. If you have a case, let the right people judge it. -- GPG/PGP ID: C0711BF1
participants (4): Dave Reisner, Rashif Ray Rahman, Tai-Lin Chu, William Giokas