[arch-general] Default value of "j" in makeflags of makepkg.conf
Salutations,

I was wondering if we could change the default value for -j in MAKEFLAGS in makepkg.conf to "-j$(nproc)". This would allow makepkg to scale the number of make jobs to each machine by default.

Regards,
Mark

--
Mark Lee <mark@markelee.com>
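[For context, a rough sketch of the relevant stanza: the commented-out "-j2" line matches the copy of /etc/makepkg.conf quoted later in this thread, and the last line is the change Mark is proposing; the comments are illustrative, not actual patch text.]

#-- Make Flags: change this for DistCC/SMP systems
#MAKEFLAGS="-j2"           # shipped default: commented out, so make runs a single job
MAKEFLAGS="-j$(nproc)"     # proposed: one job per logical CPU reported by nproc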
I would advise against doing that, considering that there are at least a handful of packages (can't name them) that have broken or otherwise malfunctioning Makefiles when run in parallel. The package maintainers _should_ be aware of those issues, and would accordingly add 'options="!makeflags"' to their PKGBUILD, but not everyone has a multiple core computer, nor did everyone test the functionality (there could be cases where -j4 runs fine, but -j8 crashes, or other weird race conditions).

Adding your line as a comment in the makepkg.conf file could be a great idea, as it is definitely a clever way of automatically setting the number of parallel jobs:

#-- Make Flags: change this for DistCC/SMP systems
# MAKEFLAGS="-j$(nproc)" will set it to the number of logical cores on your system.
MAKEFLAGS="-j8"

On 30 December 2013 22:24, Mark Lee <mark@markelee.com> wrote:
Salutations,
I was wondering if we could change the default value for -j in MAKEFLAGS in makepkg.conf to "-j$(nproc)". This would allow makepkg to scale the number of make jobs to each machine by default.
Regards, Mark -- Mark Lee <mark@markelee.com>
-- Sébastien Leblanc
Am 31.12.2013 07:51, schrieb Sébastien Leblanc:
I would advise against doing that, considering that there are at least a handful of packages (can't name them) that have broken or otherwise malfunctioning Makefiles when run in parallel. The package maintainers _should_ be aware of those issues, and would accordingly add 'options="!makeflags"' to their PKGBUILD, but not everyone has a multiple core computer,
Really? Who?
nor did everyone test the functionality (there could be cases where -j4 runs fine, but -j8 crashes, or other weird race conditions).
You are suggesting not changing to a sane default because some packages (especially in the AUR) have crappy maintainers. That's hardly a reason for anything.
On Tue, 31 Dec 2013 19:39:03 +0100 Thomas Bächler <thomas@archlinux.org> wrote:
Really? Who?
Hmm, me. Intel Atom here...
You are suggesting not changing to a sane default because some packages (especially in the AUR) have crappy maintainers. That's hardly a reason for anything.
A sane default would probably be $(nproc)-1. But in general, is it a good idea to have calls to binaries in a config file? So far, makepkg.conf doesn't have anything like this.

Happy new year,
Leonid.

--
Leonid Isaev
GnuPG key: 0x164B5A6D
Fingerprint: C0DF 20D0 C075 C3F1 E1BE 775A A7AE F6CB 164B 5A6D
On Tue, 31 Dec 2013 19:39:03 +0100 Thomas Bächler <thomas@archlinux.org> wrote:
Am 31.12.2013 07:51, schrieb Sébastien Leblanc:
I would advise against doing that, considering that there are at least a handful of packages (can't name them) that have broken or otherwise malfunctioning Makefiles when run in parallel. The package maintainers _should_ be aware of those issues, and would accordingly add 'options="!makeflags"' to their PKGBUILD, but not everyone has a multiple core computer,
Really? Who?
nor did everyone test the functionality (there could be cases where -j4 runs fine, but -j8 crashes, or other weird race conditions).
You are suggesting not changing to a sane default because some packages (especially in the AUR) have crappy maintainers. That's hardly a reason for anything.
Your definition of a sane default might not match someone else's. Many people prefer their box to not slow to a crawl just because they started makepkg :)
Am 03.01.2014 15:03, schrieb Øyvind Heggstad:
You are suggesting not changing to a sane default because some packages (especially in the AUR) have crappy maintainers. That's hardly a reason for anything.
Your definition of a sane default might not match someone else's.
Many people prefer their box to not slow to a crawl just because they started makepkg :)
Again, we're not changing to a sane default because some people are unable to use their machines properly? 'nice' works just fine, and I haven't seen machines slow down due to compiling even without it - the Linux scheduler handles many simultaneous workloads just fine.
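[For readers who haven't used it, a sketch of the kind of deprioritised build Thomas is alluding to; the specific priority values are only an example, and ionice comes up again later in the thread.]

nice -n 19 makepkg -s                # lowest CPU priority for the whole build
nice -n 19 ionice -c 3 makepkg -s    # additionally put the build's disk I/O in the idle class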
On Fri, Jan 3, 2014 at 3:16 PM, Thomas Bächler <thomas@archlinux.org> wrote:
Am 03.01.2014 15:03, schrieb Øyvind Heggstad:
You are suggesting not changing to a sane default because some packages (especially in the AUR) have crappy maintainers. That's hardly a reason for anything.
Your definition of a sane default might not match someone else's.
Many people prefer their box to not slow to a crawl just because they started makepkg :)
Again, we're not changing to a sane default because some people are unable to use their machines properly?
Actually, debugging a build that breaks because the build system can't handle -j is a giant WTF moment. I was there once, and only after hours of digging through code and logs did I find that -j was the issue. Please do not introduce it. Thanks. You can't expect every upstream to fix their autohell to conform to our expectations here.

cheers!
mar77i
Am 03.01.2014 15:21, schrieb Martti Kühne:
You can't expect every upstream to fix their autohell to conform to our expectations here.
So, we keep repeating ourselves. There is the !makeflags option for PKGBUILDs to work around this problem (which you would know if you read the thread). If a package is broken with -j, this option helps.
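[For anyone unfamiliar with the option Thomas refers to, a minimal, hypothetical PKGBUILD fragment showing how a single package opts out of the user's MAKEFLAGS; the package name, version and build steps are placeholders.]

pkgname=example            # hypothetical package
pkgver=1.0
pkgrel=1
arch=('any')
options=('!makeflags')     # makepkg will not pass the user's MAKEFLAGS to this build

build() {
  cd "$srcdir/$pkgname-$pkgver"
  make                     # runs with make's default single job
}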
On Fri, Jan 3, 2014 at 3:23 PM, Thomas Bächler <thomas@archlinux.org> wrote:
Am 03.01.2014 15:21, schrieb Martti Kühne:
You can't expect every upstream to fix their autohell to conform to our expectations here.
So, we keep repeating ourselves.
Because I have a strong opinion about this. Also to prevent people from running into this who are not that experienced in making things work.
There is the !makeflags option for PKGBUILDs to work around this problem (which you would know if you read the thread). If a package is broken with -j, this option helps.
It's not nice to introduce this; then people start packaging some new piece of software they want to throw on the aur, but which no one cared to build with -j yet, and they would check their build trees again and again and spend as much time on this as I did to figure out what was going on - all while a manual build just worked for them. Garbage error messages, huge autohell pains, and all because of mr. brain0.

cheers!
mar77i
Am 03.01.2014 15:33, schrieb Martti Kühne:
On Fri, Jan 3, 2014 at 3:23 PM, Thomas Bächler <thomas@archlinux.org> wrote:
Am 03.01.2014 15:21, schrieb Martti Kühne:
You can't expect every upstream to fix their autohell to conform to our expectations here.
So, we keep repeating ourselves.
Because I have a strong opinion about this. Also to prevent people from running into this who are not that experienced in making things work.
If you are not "experienced", you should think about your operating system choice. We are not a kindergarten, we are a distribution with a target audience of experienced and advanced users.
There is the !makeflags option for PKGBUILDs to work around this problem (which you would know if you read the thread). If a package is broken with -j, this option helps.
It's not nice to introduce this, then people start packaging some new piece of software they want to throw on the aur,
If it were my choice, we would enforce high quality standards for the AUR (which would likely force us to delete 90% of PKGBUILDs from it). If you just want to "throw a piece of software on the AUR" without checking the PKGBUILD for compliance with expected quality standards and correctness, then fuck you. I will not stand by while we encourage people to continue to produce low-quality bullshit and upload it to the AUR.
but which no one cared to build with -j yet, and they would check their build trees again and again and spend as much time on this as I did to figure out what was going on - all while a manual build just worked for them.
All I hear is whining because you were unaware of a very common issue. As a package maintainer or AUR maintainer, it is your duty to test whether the build causes problems with -j (and you don't even need a multi-core machine to do that).
Garbage error messages, huge autohell pains, and all because of mr. brain0.
So, now I am responsible for every low-quality piece of shit software that is being published. You are giving me way too much credit.
On Friday 03 Jan 2014 15:49:27 Thomas Bächler wrote:
If you are not "experienced", you should think about your operating system choice. We are not a kindergarten, we are a distribution with a target audience of experienced and advanced users.
I reckon plenty of Arch users weren't used to customising or building packages before they came to Arch, and may not have done all that much building from source. That's not unreasonable.
If it were my choice, we would enforce high quality standards for the AUR (which would likely force us to delete 90% of PKGBUILDs from it).
Sounds like a barrel of laughs!
If you just want to "throw a piece of software on the AUR" without checking the PKGBUILD for compliance with expected quality standards and correctness, then fuck you. I will not stand by while we encourage people to continue to produce low-quality bullshit and upload it to the AUR.
So just to get this straight: if someone's going to become a packager, they need to learn how to do it *instantly*. There should be no margin for error and inexperience. Correct? Have you considered a career in education? :p Paul
Am 03.01.2014 16:11, schrieb Paul Gideon Dann:
If it were my choice, we would enforce high quality standards for the AUR (which would likely force us to delete 90% of PKGBUILDs from it).
Sounds like a barrel of laughs!
When I look at the AUR sometimes, I don't laugh - I cry. The current state of the AUR makes Arch Linux look like a very bad joke.
So just to get this straight: if someone's going to become a packager, they need to learn how to do it *instantly*.
They should first learn it, and then start uploading to the AUR. In particular, you should not upload to the AUR instantly.
There should be no margin for error and inexperience.
Error and inexperience can occur while learning. If you upload to the AUR, I expect you to have polished and finished material, not your first draft. There's enough places with kind people who will look at your PKGBUILD and point out your mistakes. The AUR isn't one of those places.
On Friday 03 Jan 2014 16:26:27 Thomas Bächler wrote:
Error and inexperience can occur while learning. If you upload to the AUR, I expect you to have polished and finished material, not your first draft.
There's enough places with kind people who will look at your PKGBUILD and point out your mistakes. The AUR isn't one of those places.
If these packages actually are "first drafts", then yeah, people are not actually taking their packages seriously, and maybe the AUR would benefit from a scratchbox area. However, my experience is that many people respond well when I point out the mistakes they've made. Many people are happy with "it works", and fail to see the purpose of going out of their way to ensure their work is "correct". Such people benefit from the input of those of us for whom this comes more naturally.

Bottom line: many people who own packages on the AUR simply want to install the software with the minimum amount of effort, and thought it would be nice to share their hacked-together PKGBUILD with others to save them the trouble. If we want to enforce package correctness and good practice, we need more automation to flag those sorts of issues.

Paul
On 01/03/2014 09:26 AM, Thomas Bächler wrote:
Error and inexperience can occur while learning. If you upload to the AUR, I expect you to have polished and finished material, not your first draft.
There's enough places with kind people who will look at your PKGBUILD and point out your mistakes. The AUR isn't one of those places.
What I've noticed is that many times "error and inexperience" are not what break PKGBUILDs in the AUR; rather, it is rapid change in both the source and in Arch that renders what was a correct PKGBUILD broken almost overnight. I agree with TB that what gets uploaded to the AUR should be a correct, finished product; at the same time we shouldn't, in hindsight, beat up on packagers when breaks are the result of either source changes or Arch changes. This is especially true when building large source projects where Arch is several major versions ahead of what many contributors use (gcc, etc.).

The past year has seen incredible changes in Arch, as well as in many packages, and as a result I've noticed that many packages I need from the AUR need to be updated. Hopefully going forward we won't be changing from systemd to something else or moving chunks of the filesystem around for a while, which should allow many of these problems to work themselves out :-)

--
David C. Rankin, J.D., P.E.
Hi
'nice' works just fine, and I haven't seen machines slow down due to compiling even without it - the Linux scheduler handles many simultaneous workloads just fine.
That's right. Linux kernel developers spent a lot of time optimizing the scheduler both for long batch jobs (like compilation) and for interactive jobs (like a UI). Briefly: if a process uses a lot of CPU, its priority is decreased. So compilation with a few threads should be almost invisible to users.

As for me, I always set -j to $(cpunum) on all my Arch machines. I prefer to save time on compilation rather than worry about "the package might be broken because of a crappy PKGBUILD". I've compiled hundreds of AUR packages and none of them has broken because of -jN so far. And if a project cannot be compiled with -jN, it most likely means incorrect dependencies in the Makefile; in fact it can break even with a single-threaded compilation, e.g. if gmake decides to change the compilation order in a future version.

Using -j$(cpunum) is a sane default that saves a lot of time to users. I do not see why anyone should use something different. If a PKGBUILD is broken, then !makeflags should be used, and ideally the issue should be reported upstream and resolved there (i.e. fix the Makefile dependencies).
On 01/03/2014 10:37 AM, Anatol Pomozov wrote:
Using -j$(cpunum) is a sane default that saves a lot of time to users.
I agree, but for the record, 'nice' and scheduling are no panacea in my experience. It's fine for CPU loads, but compilations are also disk-heavy (which mattered when I used a spinning disk) and sometimes RAM-heavy (-j8 on my pet C++ project uses 2GB of RAM, which could push other programs I'm using into swap). -Isaac
On Fri, Jan 3, 2014 at 9:55 PM, Isaac Dupree <ml@isaac.cedarswampstudios.org> wrote:
On 01/03/2014 10:37 AM, Anatol Pomozov wrote:
Using -j$(cpunum) is a sane default that saves a lot of time to users.
I agree, but for the record, 'nice' and scheduling are no panacea in my experience. It's fine for CPU loads, but compilations are also disk-heavy (which mattered when I used a spinning disk) and sometimes RAM-heavy (-j8 on my pet C++ project uses 2GB of RAM, which could push other programs I'm using into swap).
-Isaac
Have you tried using ionice?
On 01/03/2014 04:10 PM, Karol Blazewicz wrote:
On Fri, Jan 3, 2014 at 9:55 PM, Isaac Dupree <ml@isaac.cedarswampstudios.org> wrote:
On 01/03/2014 10:37 AM, Anatol Pomozov wrote:
Using -j$(cpunum) is a sane default that saves a lot of time to users.
I agree, but for the record, 'nice' and scheduling are no panacea in my experience. It's fine for CPU loads, but compilations are also disk-heavy (which mattered when I used a spinning disk) and sometimes RAM-heavy (-j8 on my pet C++ project uses 2GB of RAM, which could push other programs I'm using into swap).
-Isaac
Have you tried using ionice?
I think I tried ionice once, but now I have an SSD that's fast enough that I wouldn't be able to tell the difference. Does it work well for you? -Isaac
On Fri, Jan 3, 2014 at 10:23 PM, Isaac Dupree <ml@isaac.cedarswampstudios.org> wrote:
On 01/03/2014 04:10 PM, Karol Blazewicz wrote:
On Fri, Jan 3, 2014 at 9:55 PM, Isaac Dupree <ml@isaac.cedarswampstudios.org> wrote:
On 01/03/2014 10:37 AM, Anatol Pomozov wrote:
Using -j$(cpunum) is a sane default that saves a lot of time to users.
I agree, but for the record, 'nice' and scheduling are no panacea in my experience. It's fine for CPU loads, but compilations are also disk-heavy (which mattered when I used a spinning disk) and sometimes RAM-heavy (-j8 on my pet C++ project uses 2GB of RAM, which could push other programs I'm using into swap).
-Isaac
Have you tried using ionice?
I think I tried ionice once, but now I have an SSD that's fast enough that I wouldn't be able to tell the difference. Does it work well for you?
-Isaac
I have ancient hardware and I don't really compile much. I can let my computer do its thing while I prepare dinner.
On Fri, 03 Jan 2014 15:49:27 +0100 Thomas Bächler <thomas@archlinux.org> wrote:
If it were my choice, we would enforce high quality standards for the AUR (which would likely force us to delete 90% of PKGBUILDs from it).
Speaking of which: https://bbs.archlinux.org/viewtopic.php?id=175171
On Friday 03 Jan 2014 15:33:05 Martti Kühne wrote:
Because I have a strong opinion about this. Also to prevent people from running into this who are not that experienced in making things work.
If someone makes more than a few packages, they will have encountered makepkg.conf, to at least set their e-mail address. When I started using Arch, I think I discovered makepkg.conf and added the -j to makeflags pretty much on day one of experimenting with PKGBUILDs. But I think it comes down to this:

1) If someone knows that the -j flag exists, it won't take them long to figure out how to add it to makeflags, and then the responsibility is with them to ensure they know it can (rarely!) break some builds.

2) If the -j flag is added by default, builds may break unpredictably, and users will not know why. They may not be aware of -j, and may not make the connection to makepkg.conf at all.

Option 1 seems a safer default to me. However, I think this should be properly documented in makepkg.conf: there should be an actual suggestion to add -j, along with a warning that in rare cases it may cause breakage. Just a single-line comment, possibly with a link to the wiki, would be enough.

Paul
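[A rough sketch of the kind of one-line hint Paul is suggesting; the comment wording and the wiki pointer are illustrative, not proposed verbatim text.]

#-- Make Flags: change this for DistCC/SMP systems
# Setting MAKEFLAGS="-j$(nproc)" uses all logical cores; note that a few packages
# do not build reliably in parallel (see the makepkg article on the wiki).
#MAKEFLAGS="-j2"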
Hi,

On Fri, Jan 3, 2014 at 6:55 AM, Paul Gideon Dann <pdgiddie@gmail.com> wrote:
On Friday 03 Jan 2014 15:33:05 Martti Kühne wrote:
Because I have a strong opinion about this. Also to prevent people from running into this who are not that experienced in making things work.
If someone makes more than a few packages, they will have encountered makepkg.conf, to at least set their e-mail address. When I started using Arch, I think I discovered makepkg.conf and added the -j to makeflags pretty much on day one of experimenting with PKGBUILDs. But I think it comes down to this:
1) If someone knows that the -j flag exists, it won't take them long to figure out how to add it to makeflags, and then the responsibility is with them to ensure they know it can (rarely!) break some builds.
2) If the -j flag is added by default, builds may break unpredictably, and users will not know why. They may not be aware of -j, and may not make the connection to makepkg.conf at all.
Option 1 seems a safer default to me. However, I think this should be properly documented in makepkg.conf: there should be an actual suggestion to add -j, along with a warning that in rare cases it may cause breakage. Just a single-line comment, possibly with a link to the wiki, would be enough.
But there will always be people who use -jN (e.g. me). If we decide to keep broken PKGBUILDs in the AUR forever, then sooner or later -jN people will be hit by these issues. So the choice is really:

1) Keep the broken packages forever and care only about -j1 people (who are the majority now).
2) Make -jN the default. It will speed up compilation, but it will also make the broken packages more visible.

IMHO #2 is better: it is better to highlight all the broken PKGBUILDs and fix them, thus making them work for everyone.
On Fri, 3 Jan 2014 08:03:33 -0800 Anatol Pomozov <anatol.pomozov@gmail.com> wrote:
Hi
On Fri, Jan 3, 2014 at 6:55 AM, Paul Gideon Dann <pdgiddie@gmail.com> wrote:
On Friday 03 Jan 2014 15:33:05 Martti Kühne wrote:
Because I have a strong opinion about this. Also to prevent people from running into this who are not that experienced in making things work.
If someone makes more than a few packages, they will have encountered makepkg.conf, to at least set their e-mail address. When I started using Arch, I think I discovered makepkg.conf and added the -j to makeflags pretty much on day one of experimenting with PKGBUILDs. But I think it comes down to this:
1) If someone knows that the -j flag exists, it won't take them long to figure out how to add it to makeflags, and then the responsibility is with them to ensure they know it can (rarely!) break some builds.
2) If the -j flag is added by default, builds may break unpredictably, and users will not know why. They may not be aware of -j, and may not make the connection to makepkg.conf at all.
Option 1 seems a safer default to me. However, I think this should be properly documented in makepkg.conf: there should be an actual suggestion to add -j, along with a warning that in rare cases it may cause breakage. Just a single-line comment, possibly with a link to the wiki, would be enough.
But there will always be people who use -jN (e.g. me). If we decide to keep broken PKGBUILDs in the AUR forever, then sooner or later -jN people will be hit by these issues. So the choice is really:
1) Keep the broken packages forever and care only about -j1 people (who are the majority now).
2) Make -jN the default. It will speed up compilation, but it will also make the broken packages more visible.
IMHO #2 is better: it is better to highlight all the broken PKGBUILDs and fix them, thus making them work for everyone.
Why is the default "-j" such a big deal? IMHO, the way things are currently is OK. You want to speed up compilation -- there is an option for that. FWIW, if you compile more than one package, you'll have to change lots of things in makepkg.conf besides MAKEFLAGS (signing, packager info, CFLAGS) anyway.

Cheers,

--
Leonid Isaev
GnuPG key: 0x164B5A6D
Fingerprint: C0DF 20D0 C075 C3F1 E1BE 775A A7AE F6CB 164B 5A6D
On Jan 3, 2014 11:23 AM, "Leonid Isaev" <lisaev@umail.iu.edu> wrote:
Why is the default "-j" such a big deal?
It really isn't a big deal. While I think we should leave it unset so the user can set it themselves, most of the packages in the AUR that have an issue with -j > 1 already have a workaround in place. At least that's my experience.
On Fri, Jan 03, 2014 at 03:23:24PM +0100, Thomas Bächler wrote:
Am 03.01.2014 15:21, schrieb Martti Kühne:
You can't expect every upstream to fix their autohell to conform to our expectations here.
So, we keep repeating ourselves.
There is the !makeflags option for PKGBUILDs to work around this problem (which you would know if you read the thread). If a package is broken with -j, this option helps.
netbsd / pkgsrc did switch to a more concurrent default for $MAKE_JOBS. MAKE_JOBS_SAFE=no is a way to turn it off.

In current 'stable' pkgsrc, 590 / 11862 packages have it set (to no, i.e., not parallel build safe). Each one was added after someone ran into the pkg not building for them while it built for others.

You wanna find those inexplicably not building on some machines manually, again? Have fun.
On 04/01/14 01:03, Martin S. Weber wrote:
On Fri, Jan 03, 2014 at 03:23:24PM +0100, Thomas Bächler wrote:
Am 03.01.2014 15:21, schrieb Martti Kühne:
You can't expect every upstream to fix their autohell to conform to our expectations here.
So, we keep repeating ourselves.
There is the !makeflags option for PKGBUILDs to work around this problem (which you would know if you read the thread). If a package is broken with -j, this option helps.
netbsd / pkgsrc did switch to a more concurrent default for $MAKE_JOBS.
MAKE_JOBS_SAFE=no is a way to turn it off.
In current 'stable' pkgsrc, 590 / 11862 packages have it set (to no, i.e., not parallel build safe).
Each one was added after someone ran into the pkg not building for them while it built for others.
You wanna find those inexplicably not building on some machines manually, again?
Have fun.
Why would it need done manually? You have already found us a list! Allan
On Sat, Jan 04, 2014 at 01:07:47AM +1000, Allan McRae wrote:
On 04/01/14 01:03, Martin S. Weber wrote:
On Fri, Jan 03, 2014 at 03:23:24PM +0100, Thomas Bächler wrote:
Am 03.01.2014 15:21, schrieb Martti Kühne:
You can't expect every upstream to fix their autohell to conform to our expectations here.
So, we keep repeating ourselves.
There is the !makeflags option for PKGBUILDs to work around this problem (which you would know if you read the thread). If a package is broken with -j, this option helps.
netbsd / pkgsrc did switch to a more concurrent default for $MAKE_JOBS.
MAKE_JOBS_SAFE=no is a way to turn it off.
In current 'stable' pkgsrc, 590 / 11862 packages have it set (to no, i.e., not parallel build safe).
Each one was added after someone ran into the pkg not building for them while it built for others.
You wanna find those inexplicably not building on some machines manually, again?
Have fun.
Why would it need done manually? You have already found us a list!
Because for each new occurrence, it will have to be determined manually: concurrency brings non-determinism with it. Many of these pkgs built just fine (tm) for developers a, b and d (not only on, but also on multi-core and/or SMP machines) while it didn't for devs c, f, g, and, much worse, for users u, y and z.

I mean, feel free to learn the sane default for (said 590) pkgs from pkgsrc, or consider the process that spans from 2007 until now, where pkgs still are flagged MAKE_JOBS_SAFE after the first user has run into them not building. Also, not each pacman pkg has a 1:1 mirror candidate in pkgsrc. IMHO, that should make you pause and (re)consider for a moment.

Kind Regards,
-Martin
On 03/01/2014 16:24, Martin S. Weber wrote:
because for each new occurrence, it will have to be determined manually: concurrency brings non-determinism with it. Many of these pkgs built just fine (tm) for developers a, b and d (not only on, but also on multi-core and/or SMP machines) while it didn't for devs c, f, g, and, much worse, for users u, y and z.
I mean, feel free to learn the sane default for (said 590) pkgs from pkgsrc, or consider the process that spans from 2007 until now, where pkgs still are flagged MAKE_JOBS_SAFE after the first user has run into them not building.
As far as I know, MAKE_JOBS_SAFE and 'options="!makeflags"' are packaging "tricks" to work around an upstream bug. Enabling parallel builds by default would only reveal bugs, which is a good thing generally, thus I don't understand your objections.

Cheers,

--
Timothée Ravier
On Fri, Jan 03, 2014 at 04:44:05PM +0100, Timothée Ravier wrote:
On 03/01/2014 16:24, Martin S. Weber wrote:
(...)
As far as I know, MAKE_JOBS_SAFE and 'options="!makeflags"' are packaging "tricks" to work around an upstream bug.
I agree with that assessment.
Enabling parallel builds by default would only reveal bugs, which is a good thing generally, thus I don't understand your objections.
Yes, it is a good thing generally. I'm not sure 'objection' is the right term, though; I'd like to think of myself as presenting angles of view onto the subject that have been under-represented.

Anyways, why I bring them up: From my experience _watching_ pkgsrc development (I have not been involved myself other than being a pkgsrc user for a decade, talking to devs from time to time, opening PRs etc.), upstream developers care about different things than package maintainers do. Yes, in the end, both want to deliver software to users, but ISTM upstream devs are annoyed by the partially religious-appearing dogmas of the packagers (which the latter often have for good reason). While upstream devs are specialists for the domain of software they are writing, ISTM that packagers' domain of specialty does not overlap with upstream's. In other words, say, the packager knows everything about toolchain, API and ABI compatibility, static vs. dynamic linking, etc., while the upstream dev does not know (as much). Sermons from packager from pkg systems A, B, C, D (e.g., pkgsrc, apt guys, arch guys, rpm guys, from different distributions & companies etc.) sooner or later provoke ignorance from upstream dev (which is, to me, humanly understandable). In other words again: you might find what you consider a bug, but upstream will not care, ignore your patches, not listen to you or simply get annoyed (witnessed instances of this before).

Read e.g. this message from the author of SQLite and fossil about packager's need/want for splitting dynamic libs from projects using said dynamic libs (sqlite in that instance):

http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg14153.html

Richard is a great sw dev (as witnessed by the artifacts he created or helped to create), but a packager has a different point of view and area of expertise, and might come to a different conclusion. And that's perfectly fine, to some extent.

IMHO you're just about to open a can of worms that only you yourself will want to swallow, for a) the greater good of squishing bugs and b) no change in the big picture. I suppose _I_ know enough about the dist-dependent way of turning off parallel builds to scratch when it itches ... and I think I have presented a different angle onto the issue, so my job is done :)

Regards,
-Martin
On 03/01/2014 17:46, Martin S. Weber wrote:
Sermons from packager from pkg systems A, B, C, D (...) sooner or later provoke ignorance from upstream dev (...).
No sermons should ever be made to upstream by packagers. Upstream should not be blamed for bugs, but helped with fixing them. This is free software; let's just try to be nice to each other.
In other words again: you might find what you consider a bug, but upstream will not care, ignore your patches, not listen to you or simply get annoyed (witnessed instances of this before).
As you've said, the packagers should provide the patch to fix the issue. If upstream does not want that, then it's fine; the packager should use the "trick". I can hardly see why any upstream would refuse a patch to fix parallel build issues.
Read e.g. this message from the author of SQLite and fossil about packager's need/want for splitting dynamic libs from projects using said dynamic libs (sqlite in that instance):
http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg14153.html
I'm not sure this issue is related to the current discussion, as he is upset with distributions combining old software with new, something never done in Arch as far as I know:
I hate having to add silly work-arounds in the code to accommodate distributions trying to use an older SQLite with a newer Fossil.
And I think I remember that Arch does have static libraries when it makes sense.
IMHO you're just about to open a can of worms that only you yourself will want to swallow, for a) the greater good of squishing bugs and b) no change in the big picture.
There is nothing to "swallow" here as there already is an option to bypass parallel build. I'm not opening anything either as I'm just not against the proposed change. It looks like a convenience change, which does not really have arguments against it as it's just a default setting.

--
Timothée Ravier
On Fri, Jan 03, 2014 at 07:06:31PM +0100, Timothée Ravier wrote:
On 03/01/2014 17:46, Martin S. Weber wrote: (...)
(...) Read e.g. this message from the author of SQLite and fossil about packager's need/want for splitting dynamic libs from projects using said dynamic libs (sqlite in that instance):
http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg14153.html
I'm not sure this issue is related to the current discussion, as he is upset with distributions combining old software with new, something never done in Arch as far as I know:
I hate having to add silly work-arounds in the code to accommodate distributions trying to use an older SQLite with a newer Fossil.
yes, it's a diff issue, but shows how upstream & packagers have a diff angle of view onto the subject.
IMHO you're just about to open a can of worms that only you yourself will want to swallow, for a) the greater good of squishing bugs and b) no change in the big picture.
There is nothing to "swallow" here as there already is an option to bypass parallel build. I'm not opening anything either as I'm just not against the proposed change. It looks like a convenience change, which does not really have arguments against it as it's just a default setting.
hmm. As I said, I brought up my part, may the powers that be act wisely. Regards, -Martin
On 12/31/2013 12:51 AM, Sébastien Leblanc wrote:
I would advise against doing that, considering that there are at least a handful of packages (can't name them) that have broken or otherwise malfunctioning Makefiles when run in parallel.
There are more than a few. If you get a PKGBUILD from any of the Arch repos or the AUR, you can be relatively sure building in parallel will work. However, if you are working on any other independent project, setting j > 1 can cause extreme build issues. I ran into this exact issue with the trinitydesktop project. Building with -j4 greatly reduced build time, but numerous 'random' build failures were introduced. Building with -j1 was reliable. (This problem has likely been eliminated now.) Setting to build in parallel by default, maximizing all cores, can bite you. I screwed myself by setting:

cat /etc/makepkg.conf
<snip>
#MAKEFLAGS="-j2"
CPUCORES=$(grep -c "^processor" /proc/cpuinfo)
if test $CPUCORES -gt 1; then
  MAKEFLAGS="-j${CPUCORES}"
fi
<snip>

The bottom line, building custom projects in parallel (from someone else's code) should be avoided until you are sure you have eliminated all other issues, then start building in parallel. That is better handled on the command line than in makepkg.conf (just my $.02)

--
David C. Rankin, J.D., P.E.
On Sun, 2014-01-05 at 15:42 -0600, David C. Rankin wrote:
On 12/31/2013 12:51 AM, Sébastien Leblanc wrote:
I would advise against doing that, considering that there are at least a handful of packages (can't name them) that have broken or otherwise malfunctioning Makefiles when run in parallel.
There are more than a few. If you get a PKGBUILD from any of the Arch repos or the AUR, you can be relatively sure building in parallel will work. However, if you are working on any other independent project, setting j > 1 can cause extreme build issues. I ran into this exact issue with the trinitydesktop project. Building with -j4 greatly reduced build time, but numerous 'random' build failures were introduced. Building with -j1 was reliable. (This problem has likely been eliminated now.) Setting to build in parallel by default, maximizing all cores, can bite you. I screwed myself by setting:

cat /etc/makepkg.conf
<snip>
#MAKEFLAGS="-j2"
CPUCORES=$(grep -c "^processor" /proc/cpuinfo)
if test $CPUCORES -gt 1; then
  MAKEFLAGS="-j${CPUCORES}"
fi
<snip>
The bottom line, building custom projects in parallel (from someone else's code) should be avoided until you are sure you have eliminated all other issues, then start building in parallel. That is better handled on the command line than in makepkg.conf (just my $.02)
Salutations!

This all boils down to what Arch considers a bug. If code that cannot be compiled in parallel is a bug, then Arch should make parallel building the default (since these are bugs that upstream should fix). If instead it is not a bug but the intention of the upstream developers, then it shouldn't be enabled by default. Who is responsible for ensuring parallel building works?

I am personally for parallel compiling, since I've only encountered one package that doesn't build in parallel (an early version of mpich). At the very least, the default value of the commented-out MAKEFLAGS could be changed to "-j$(nproc)" instead of "-j".

Regards,
Mark

--
Mark Lee <mark@markeelee.com>
On Sunday 05 Jan 2014 23:03:23 Mark Lee wrote:
This all boils down to what Arch considers a bug. If code that cannot be compiled in parallel is a bug, then Arch should make parallel building the default (since these are bugs that upstream should fix). If instead it is not a bug but the intention of the upstream developers, then it shouldn't be enabled by default. Who is responsible for ensuring parallel building works?
It's worse than that: it's not just a bug, it's a randomly-occurring bug. It's something that users need to be aware of *before* they encounter it, because it will not produce a reliable error message that can be searched for, and may not even be descriptive.

I think it makes more sense to suggest the option via the wiki, or in the makepkg.conf file, along with a warning about potential occasional breakage. This seems the most Arch-like solution to me: it's a safe default, but offers improved performance for users willing to take the time to poke around and understand better.

Paul
participants (15)
- Allan McRae
- Anatol Pomozov
- Daniel Leining
- David C. Rankin
- Isaac Dupree
- Karol Blazewicz
- Leonid Isaev
- Mark Lee
- Martin S. Weber
- Martti Kühne
- Paul Gideon Dann
- Sébastien Leblanc
- Thomas Bächler
- Timothée Ravier
- Øyvind Heggstad