AutoUpdateBot harmful
Hello,

just noticed that the denaro package [1] was updated by AutoUpdateBot two days ago. That account seems to be a bot driven by some GitHub Actions tooling [2]. This update broke the package, since it wasn't as simple as "change version number and checksum" (see my comment on the package page). Investigating a bit further, I noticed this bot does not seem to run any checks on whether the package is still buildable [3].

My question now: is there any policy about such bots? Are they allowed? Should they at least test whether the package still builds before uploading the package update?

Thanks
j.r

[1] https://aur.archlinux.org/packages/denaro
[2] https://github.com/arch4edu/aur-auto-update
[3] https://github.com/arch4edu/aur-auto-update/actions/runs/4655655968/jobs/825...
Hello,

Although I am unsure of the rules, I have seen such bots used a lot through GitHub CI/CD. I do not believe they are explicitly banned; people often automate the bumping of new versions, but the result should be tested before being pushed to the AUR, ideally built and tested in a clean chroot. However, a large majority of AUR packages mirrored to GitHub use auto-updaters in some capacity, so if they were banned, a lot of housekeeping would need to be done.

One of the projects I co-maintain uses a script to auto-bump the version, but it tests the build and submits the change as a PR; it is then manually checked and merged into master before being pushed to the AUR. Bots should never push straight to the AUR, whether it is within the guidelines or not. It's all fun when you can sit back and be lazy, but once the packaging changes, you have just shipped a broken build to all the people using AUR helpers.
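The bump-test-PR flow described above can be sketched in a few lines of shell. This is a minimal sketch, not the actual script: the function names are made up, `current_pkgver` assumes a plain `pkgver=` line (not one built from other variables), and the build/PR steps are left as comments because they rely on Arch tooling (`updpkgsums` from pacman-contrib, `makechrootpkg` from devtools) and a configured GitHub remote.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Read the current pkgver from a PKGBUILD (assumes a plain `pkgver=X`
# line, not one assembled from other variables).
current_pkgver() {
  sed -n 's/^pkgver=//p' "$1"
}

# Does upstream's version differ from what's packaged? Real scripts may
# need vercmp(8) for non-trivial version schemes; plain inequality is
# enough for a sketch.
needs_bump() {
  [ "$1" != "$2" ]
}

# The rest of the flow needs Arch tooling and a configured remote, so it
# is sketched as comments only:
#   updpkgsums                      # pacman-contrib: refresh checksums
#   makechrootpkg -c -r "$CHROOT"   # devtools: clean-chroot build, abort on failure
#   git checkout -b "bump-$new" && git commit -am "bump to $new"
#   gh pr create --fill             # human review happens before the AUR push
```

The point of the PR step is exactly what is argued above: the automation does the repetitive part, and a human still signs off before anything reaches the AUR.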
Should they at least test if the package still builds before uploading the package update?
I do not believe you are forced to, but it is definitely recommended to build the package in a clean chroot before pushing it to the AUR.

Remember the AUR is like a massive landfill: go digging through the rubbish long enough and you will find a good package. The TUs are there to sieve through it all and ensure it is kept to some standard (and to filter out illegal or malicious content). TL;DR: don't expect official-repository-grade packages within the AUR. I guess some maintainers just want the easy way.

--
Polarian
GPG signature: 0770E5312238C760
Website: https://polarian.dev
JID/XMPP: polarian@polarian.dev
My personal 2 cents on this topic:

All of my packages are maintained by CI and are auto-updating. I don't have the time (or, to phrase it better: I'm not willing to invest the time) to do tasks manually that I can easily automate. On the other hand, all of my package-update automations patch the build and then execute it in a clean environment. If the package does not build, the automation will break and notify me to have a look at what's broken.

In the end: what's the difference between a maintainer just modifying version and checksums and then pushing the broken package to the AUR, and an automation doing the same? Also: what's the difference between a maintainer patching version and checksums, executing a clean build and then pushing it, and an automation doing the same? Nothing.

So yeah, in my opinion maintainers (or automations) should at least do a clean build on update before pushing. Putting up a policy against automations will just lead to maintainers still doing it in secret, or to maintainers dropping a bunch of packages to orphan.

--
Knut Ahlers
Software & Infrastructure Developer
Web & Blog: https://ahlers.me/
GPG-Key: 0xCB681B44 (https://knut.in/gpg)
On Wednesday, 12 April 2023 15:58:58 CEST, Knut Ahlers wrote:
My personal 2 cents on this topic:
All of my packages are maintained by CI and are auto-updating. I don't have the time (or, to phrase it better: I'm not willing to invest the time) to do tasks manually that I can easily automate. On the other hand, all of my package-update automations patch the build and then execute it in a clean environment. If the package does not build, the automation will break and notify me to have a look at what's broken.
In the end: What's the difference between a maintainer just modifying version and checksums and then pushing the broken package to AUR and an automation doing the same? Also: What's the difference between a maintainer patching version and checksums, executing a clean build and then pushing it and an automation doing the same? - Nothing.
So yeah, in my opinion maintainers (or automations) should at least do a clean build on update before pushing it. Putting up a policy against automations will just lead to maintainers still doing it in secret or to maintainers dropping a bunch of packages to orphan.
Full ack. When I maintained ~400 packages, the only way to ship them was to have a CI building them in a clean environment and auto-pushing them on successful builds. Before I had this, packages were broken all the time. A clean build should be required before pushing any package, not just ones built by a CI.

Regards,
Oskar
On Wed, Apr 12, 2023 at 04:38:27PM +0200, Oskar Roesler wrote:
Full ack. When I maintained ~400 packages, the only way to ship them was to have a CI building them in a clean environment and auto-pushing them on successful builds. Before I had this, packages were broken all the time. A clean build should be required before pushing any package, not just ones built by a CI.
"Autopush on successful builds" being the key here, imo. A lone misconfigured CI build that doesn't thoroughly test before pushing (or is perhaps missing a test) shouldn't be grounds for banning any and all CI/CD here. I don't believe anyone is suggesting that CI/CD be disabled completely; the key topic of 'thoroughly testing, automatically' is probably the avenue to follow.

I'd suggest that the package build in the original message simply gets another once-over to improve its robustness.

--
Tom Swartz
A lone misconfigured CI build that doesn't thoroughly test before pushing (or perhaps missing a test) shouldn't be grounds for banning any and all CI/CD here.
Automation is dumb; you can't expect automation to test every single possibility. It does not have a brain and it can't use common sense. If a package installs but, for example, you forgot to copy the data over into the package, sure it will build, CI/CD will pass and it will be pushed to the remote, but it's still a dysfunctional package. You cannot expect CI/CD to do all the work for you; DevOps is there to aid you, not to replace your role.
I don't believe anyone is suggesting that CI/CD is completely disabled here, but the key topic of 'thoroughly testing, automatically' is probably the avenue to follow.
Don't get me wrong, CI/CD is amazing, and I would be disappointed if it were banned. But it needs to be used in moderation. There are a large number of maintainers who will just set up CI/CD and forget about the entire package, never test it, and when it all goes wrong, abandon the package because they are too busy to maintain it. I do not care what anyone says, this is not the right attitude to have towards maintaining packages. If you don't care enough about your packages to ensure they work, then why are you even a package maintainer?
I'd suggest that the package build in the original message simply gets another once-over to improve the robustness.
Ok, if this is such an amazing idea, why do you think TUs still bump the official packages manually? And test them? TUs devote a large amount of time to maintaining their packages. Those of you who want to let an automation script do it all for you are, in a way, kicking the TUs in the balls; you are implying that TUs are doing it all wrong, or that's how it feels to me. I personally think it disregards the time given by TUs. I get it, we are all busy, but saying you MUST use CI/CD because you are too busy otherwise makes the TU role seem p*ss easy.

The AUR is like a training ground for future TUs. If you can't handle maintaining packages conforming to the packaging guidelines now, how would you cope if you ever became a TU?

Bumping packages and submitting a PR is what I suggest. It takes a lot of the manual work out of it; you then review the package, rebuild it manually, check the structure is correct, check it runs, check there are no missing dependencies, and check the release notes for incompatibilities. WHERE is all this within a CI/CD task? Last time I checked, CI/CD tasks aren't human!

Seriously, there is a reason that CI/CD isn't mass-promoted by TUs: use it sparingly to do repetitive work, but it does not replace your job!

--
Polarian
GPG signature: 0770E5312238C760
Website: https://polarian.dev
JID/XMPP: polarian@polarian.dev
First of all: thanks for all those responses, looks like this is quite a controversial topic.
Automation is dumb; you can't expect automation to test every single possibility. It does not have a brain and it can't use common sense.
If a package installs but, for example, you forgot to copy the data over into the package, sure it will build, CI/CD will pass and it will be pushed to the remote, but it's still a dysfunctional package.
That's exactly what I wanted to say initially. The specific case I referred to was just an example of what could go wrong, and that case could actually have been spotted by CI, because makepkg itself fails with an error. But if, for example, additional build steps become necessary between versions while the existing build steps still succeed, CI can't spot it. The package will look good, but functionally it might be completely broken.

So I personally agree with Polarian that it's the job of a good packager to do at least some basic functionality testing (e.g. does the software still start?) before pushing to the AUR.

What I take from the discussion is that there is currently no clear policy on this topic, but maybe one should be established? Or at least the wiki could recommend it as a best practice somewhere? I'm not that experienced with the correct way of continuing this topic, so help is appreciated.

Thanks
j.r
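The basic functionality test mentioned in this thread (does the software still start?) can be wired into CI as one extra step after the build. A minimal sketch, assuming the packaged program supports a `--version` flag and using a made-up binary name for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Cheap post-build smoke test: does the binary at least start and exit
# cleanly when asked for its version? This catches some "builds fine but
# is dysfunctional" cases that a plain makepkg pass cannot.
smoke_test() {
  local bin=$1
  timeout 10 "$bin" --version >/dev/null 2>&1
}

# In CI this would run against the binary installed from the freshly
# built package, e.g.:
#   smoke_test /usr/bin/mytool    # hypothetical binary name
```

This obviously cannot prove the program works, only that it launches; it is the floor, not the ceiling, of the manual review being argued for here.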
On 12/04/2023 23:43, j.r wrote:
So I personally agree with Polarian that it's the job of a good packager to do at least some basic functionality testing (e.g. does the software still start?) before pushing to the AUR. What I take from the discussion is that there is currently no clear policy on this topic, but maybe one should be established? Or at least the wiki could recommend it as a best practice somewhere?
Package updates always require some sort of manual review; you can't blindly bump a package to the next version, as you might encounter:

* The license changed
* Dependencies changed or there are new dependencies
* Etc.

I'd say automation is fine if it opens a pull request which can be reviewed and tested. But blindly pushing new versions to the AUR is a no-go for me.
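An automated bump can also surface exactly these points for the reviewer. A minimal sketch (the function name is made up; it assumes `.SRCINFO` files regenerated with `makepkg --printsrcinfo`): diff the license and dependency fields of the old and proposed `.SRCINFO` so such changes show up in the PR immediately.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print license/dependency changes between the old and the proposed
# .SRCINFO; exits non-zero when they differ, so CI can mark the PR as
# "needs closer review".
srcinfo_changes() {
  diff <(grep -E 'license|depends' "$1" | sort) \
       <(grep -E 'license|depends' "$2" | sort)
}
```

This doesn't replace reading the upstream changelog, but it makes the two review items listed above impossible to miss.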
On Thu, 13 Apr 2023 15:13:49 +0200 Jelle van der Waa <jelle@vdwaa.nl> wrote:

I'd say automation is fine, if it opened a pull request which can be reviewed and tested. But blindly pushing new versions to the AUR is for me a no-go.
In simple terms, automation is good; using it carelessly is bad. How can the Arch community educate AUR maintainers not to push untested PKGBUILDs?
Hello,
I'd say automation is fine, if it opened a pull request which can be reviewed and tested. But blindly pushing new versions to the AUR is for me a no-go.
Yes, but the main point of this thread was not to discuss automation; that is not the issue. The issue is that people want fully automated maintenance of AUR packages, which is just not possible. And this entire thread has been trying to prove that CI/CD is intelligent enough to spot ALL the errors which could occur during a build, which is just not possible.

I think people are forgetting that as a maintainer you are meant to look into each update: read the changelogs, check for incompatibilities, patch out anything which is non-free (if possible), patch any issues with the source which might not compile on Arch Linux for whatever reason, check for new dependencies, check if dependencies have been removed, check if the compile procedure has changed (and thus whether the PKGBUILD needs rewriting); the list goes on.

Sure, a PR to bump the release is always nice, but it should be checked against all the things I have named above, then merged, and then pushed manually. Don't have the time for the above? Try maintaining fewer packages, or try finding more time; there is no cheat code for maintaining packages.
In simple terms, automation is good, using it carelessly is bad.
Well, even careless use isn't the whole problem: someone should always check whatever CI/CD task is executed. I feel developers rely too much on CI/CD these days and want to write a script to do everything, without realising that their intervention is still needed.
How can the Arch community educate the aur maintainers to not push untested PKGBUILDs?
You can't!

As I highlighted in an earlier post, using the AUR means digging through PKGBUILDs until you find a decent one. TUs don't have all day to go through every package and check whether it sticks to the packaging guidelines. As far as I am aware, they mainly focus on ensuring no illegal or malicious packages are on the AUR, along with dealing with disputes about who should maintain what. If you feel that a package is poorly maintained, ask to co-maintain it or submit a patch to fix whatever it is.

I do feel the packaging guidelines should reflect what both Anthraxx and Jelle have recommended. And I kindly ask the people who run fully automated packages to please stop: you do not need to update the package within 2 hours of it being flagged out of date. You have upwards of 6 months until the package is orphaned, so there is no reason not to review your packages before committing.

Remember to always check a package before you build it; hopefully the TUs find and remove most of the malicious packages, but better safe than sorry.

Have a good evening,

--
Polarian
GPG signature: 0770E5312238C760
Website: https://polarian.dev
JID/XMPP: polarian@polarian.dev
Hello,

I think I have to chime in here another time, to at least give a bit more context about the situation I faced, because this discussion is missing a lot of context about which problems people may run into and solve with more automation. (At least we, the ros-melodic-on-AUR people) never had fully automated generation of new PKGBUILDs; you still had to confirm a diff and then push it to the CI/CD. There are large frameworks out there that aren't maintainable without automation, and their automation does not have the problems mentioned here.

On Thursday, 13 April 2023 17:44:44 CEST, Polarian wrote:
The issue is people want fully automated maintaining of AUR packages, which is just not possible. And this entire thread has been trying to prove CI/CD is intelligent enough to spot ALL the errors which could occur during a build and say that CI/CD can check them all, which is just not possible.
How do you even mean this? If I take it literally, you'd have to untar the pkg after building it and dig through all the files. I bet you're not doing that. A machine is much more efficient and less error-prone at this; that's why namcap exists. When the pkg builds fine and namcap doesn't warn about anything in the CI, what has been left out?
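The namcap check can be made a hard gate in CI. A minimal sketch, assuming namcap's usual `pkgname E: tag` / `pkgname W: tag` output format (the helper name and the sample tags are made up; errors fail the job, warnings don't):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fail when the namcap log contains any error line (" E: "); warning
# lines (" W: ") pass. In CI this would follow the build step, e.g.:
#   makepkg --syncdeps --noconfirm
#   namcap ./*.pkg.tar.zst | tee namcap.log
#   namcap_gate namcap.log
namcap_gate() {
  ! grep -q ' E: ' "${1:--}"
}
```

Whether warnings should also block the push is a per-package judgment call; the gate only enforces the unambiguous cases.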
I think people are forgetting that as a Maintainer you are meant to look into each update, read the change logs, check if there is any incompatibilities, patch out anything which is non-free (if possible) or patch any issues with the source which might not compile on Arch Linux for whatever reason, check new dependencies, check if dependencies have been removed, check if the compile procedure has changed, and thus the PKGBUILD will need rewriting, the list goes on.
At least in the case of ROS, pkgs provide extensive metadata via rosdep for their own build system. We parsed the metadata, and things such as license or dependencies got autogenerated. If there are any incompatibilities with Arch, well, then the build will fail in the CI and no mirroring to the AUR will happen, so why do you even bring this up? It's just wrong.
Sure a PR to bump the release is always nice, but it should be checked against all the things I have named above, and then merged, then you push it manually.
If you don't have the time for the above? try maintaining less packages, or try finding more time, there is no cheat code to maintaining packages.
This ignores all the cases you personally haven't run into. I didn't maintain ~400 packages to flex, feel good or anything similar. I got them under one umbrella again because, at the time, multiple people each only cared about a few parts of the framework, and it was all in a "worked for me this one day and I haven't updated since" state. Maintainers updated wildly whenever they had time, breaking pkgs that depended upon the old version.

Once I took over the majority of ROS melodic packages on the AUR, I mirrored them to GitHub to allow people to propose contributions without having direct access. I couldn't spend 20 hours in a week where a new revision of the core parts was released. In the beginning, I tested every contribution on my machine and, surprise, this led to issues because I didn't have the 200 deps built in clean envs lying around somewhere. We then developed a CI/CD system that automatically rebuilt the whole dependency chains of packages, testing everything against freshly built packages and not against packages that were still ABI-compatible out of sheer luck. Later on, we started using a Python script that parsed the rospkg metadata and updated the PKGBUILDs with find+replace, so that patches we had applied to some of the packages weren't deleted. It's a command-line tool and you still had to confirm the diff, but that's it.

With those three things, GitHub + CI + update-helper script, we finally got the ROS melodic packages into a state where ROS melodic was working permanently, not only for the two weeks after some maintainers had spent a lot of manual time. We were never able to test the whole framework manually, as no one would ever use all the non-core parts or have a >10,000-line codebase to test all the packages after installation, but with GH we were able to fix issues of users of some obscure package quickly, even if they hadn't figured out the solution and submitted a PR themselves.
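Determining the rebuild order for such dependency chains is itself easily automated. A minimal sketch (the package names are made up for illustration, the helper is not the actual ROS tooling): feed `package dependency` pairs to tsort(1) to get an order in which every package is built after its dependencies.

```shell
#!/usr/bin/env bash
set -euo pipefail

# stdin:  lines of "<pkg> <dependency-of-pkg>"
# stdout: packages in build order, dependencies first
build_order() {
  # tsort expects "a b" to mean "a comes before b", so swap the columns
  awk '{ print $2, $1 }' | tsort
}
```

tsort also exits non-zero on a dependency cycle, which is exactly the kind of mistake a manual rebuild would only discover halfway through.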
In simple terms, automation is good, using it carelessly is bad.

Well even carelessly it isn't bad, someone should always check whatever CI/CD task which is executed. I feel developers rely too much on CI/CD these days and want to write a script to do everything, without realising that their intervention is still needed.
Can you elaborate on this further with an example? The whole statement is so general that I don't know what you want to imply with it; doesn't it even contradict itself at the beginning? CI/CD was invented because pushing from dev machines into prod had issues.

AutoUpdateBot may be harmful because there isn't a quick manual look-over and it ignores everything except updating version, pkgrel and checksums. Heck, it doesn't even try to build the pkg before pushing. But all those problems are problems with AutoUpdateBot, not with (semi-)auto-updaters, and definitely not with "CI before actual push" systems. So, to the people with an extreme position on this: please adopt a more differentiated view of the general topic.

I stopped doing ROS melodic packages because the SW got older and a newer ROS release had been adopted by the majority, while I had to backport more and more libraries to keep it running.

Regards,
Oskar
Hello everyone, I rarely ever join these discussions, so please bear with me, should I breach the code of conduct somehow. (And yes, I am fully aware I will live to regret this mail) Quoting Polarian <polarian@polarian.dev>:
Hello,
I'd say automation is fine, if it opened a pull request which can be reviewed and tested. But blindly pushing new versions to the AUR is for me a no-go.
Yes, but the main part of this thread was not to discuss automation, that is not the issue. The issue is people want fully automated maintaining of AUR packages, which is just not possible. And this entire thread has been trying to prove CI/CD is intelligent enough to spot ALL the errors which could occur during a build and say that CI/CD can check them all, which is just not possible.
I disagree here. Nobody tried to make the point that CI/CD can spot all errors. But I assume, and hope, your point is also not that a human can do so; I know *I* cannot. Automation and CI/CD are an excellent choice for everything that *can* be automated. Your example of "does it compile on Arch"? 10/10 for automation: reproducible, with a binary result. Why not automate that check?
I think people are forgetting that as a Maintainer you are meant to look into each update, read the change logs, check if there is any incompatibilities, patch out anything which is non-free (if possible) or patch any issues with the source which might not compile on Arch Linux for whatever reason, check new dependencies, check if dependencies have been removed, check if the compile procedure has changed, and thus the PKGBUILD will need rewriting, the list goes on.
Sure a PR to bump the release is always nice, but it should be checked against all the things I have named above, and then merged, then you push it manually.
If you don't have the time for the above? try maintaining less packages, or try finding more time, there is no cheat code to maintaining packages.
Is that an official statement, or your personal opinion? It sounds opinionated to me. I had a quick look at the AUR submission guidelines but was unable to find anything at all on automation or required pre-update checks. If you can point me to relevant documentation, I'd appreciate that a lot.

Also, I think it's accepted that the AUR has varying degrees of quality and that the requirements are less strict than for the core repositories. Is that a good thing? It depends: if you want a large package base, then yes. If you want a staging area for packages that *maybe* eventually get migrated into a core repo, also yes. If you want packages of the highest quality, then no. But IMHO, having a "vetting area" for packages makes tons of sense.

Should quality be high in the AUR as well? Ideally yes, but welcome to the real world. The AUR packages are maintained by volunteers, who are not paid and who have a life outside the AUR (hopefully). I have the highest respect for people that maintain hundreds or thousands of packages for the benefit of others, and if automation makes their lives easier, more power to them! I'd rather have a package that breaks occasionally and needs manual intervention *then*, rather than not having that package in the AUR at all. But then: take that with a grain of salt, it's personal opinion, not canon.

Really, all of this is about *tradeoffs*. If I have a fully automated CI/CD pipeline for my AUR package, and it breaks once every two years, is that good enough? In my book, absolutely, but your mileage may vary, and you are welcome to maintain your AUR packages differently. I am maintaining an AUR package for which I am also the main contributor, for example. I know when the dependencies change and when the license changes, so for me, 100% automation makes sense (my CI/CD pipeline catches build errors and has decent test coverage, of course). Other maintainers have different situations and requirements.
This finally is my main point: There is no simple "this is the right way to do it, and everything else is wrong". Rather, a lot of those decisions are situative and subjective. I don't believe in simple answers and "one size fits all" solutions in software engineering. There's just various degrees of broken-ness :)
In simple terms, automation is good, using it carelessly is bad.
Well even carelessly it isn't bad, someone should always check whatever CI/CD task which is executed. I feel developers rely too much on CI/CD these days and want to write a script to do everything, without realising that their intervention is still needed.
How can the Arch community educate the aur maintainers to not push untested PKGBUILDs?
You cant!
As I highlighted in a earlier post, the AUR is digging through PKGBUILDs until you find a decent one. TUs don't have all day to go through all packages and check whether they are sticking to the packaging guidelines. As far as I am aware, they mainly focus on ensuring no illegal packages are on the AUR, and no malicious packages, along with dealing with disputes about who should maintain what.
If you feel that a package is poorly maintained, ask for co-maintainer or submit a patch to fix whatever it is.
I do feel the packaging guidelines should reflect what both Anthraxx and Jelle has recommended. And I kindly ask for the sake of the people who use fully automated packages to please stop, you do not need to update the package within 2 hours of it being flagged out of date, you have upwards of 6 months until the package is orphaned, there is no reason to not be reviewing your packages before committing.
My 2c: I will only switch from an automated workflow to a manual one if I see benefits to it. For my personal use cases, the automation is much less error-prone than I am, and much more reliable (it never gets tired of checking the same old corner cases). Asking for 100% reliability is IMHO not realistic in any of these scenarios.

Thank you very much for reading until here,

And if we shadows have offended, think but this, and all is mended: That you have all but slumbered here, while these visions did appear.
You've spoken the words inside my mind. I created this bot because sometimes I simply don't have time to do even a simple version bump. If someone feels they can do better and would like to maintain any of my packages, please comment under that package and I'll add you as a co-maintainer to replace the auto-update bot.

On 2023-04-14 01:09, "tobi" <tobi@tobi-wan-kenobi.at> wrote:
Hello,
I'd say automation is fine, if it opened a pull request which can be reviewed and tested. But blindly pushing new versions to the AUR is for me a no-go.
Yes, but the main part of this thread was not to discuss automation, that is not the issue. The issue is people want fully automated maintaining of AUR packages, which is just not possible. And this entire thread has been trying to prove CI/CD is intelligent enough to spot ALL the errors which could occur during a build and say that CI/CD can check them all, which is just not possible.
I disagree here. Nobody tried to make the point that CI/CD can spot all errors. But I assume and hope your point is also not that a human can do so. I know *I* cannot. Automation and CI/CD is an excellent choice for everything that *can* be automated. Your example of "does it compile on Arch"? 10/10 for automation. Reproducible and a binary result. Why not automate that check?
I think people are forgetting that as a Maintainer you are meant to look into each update, read the change logs, check if there is any incompatibilities, patch out anything which is non-free (if possible) or patch any issues with the source which might not compile on Arch Linux for whatever reason, check new dependencies, check if dependencies have been removed, check if the compile procedure has changed, and thus the PKGBUILD will need rewriting, the list goes on.
Sure a PR to bump the release is always nice, but it should be checked against all the things I have named above, and then merged, then you push it manually.
If you don't have the time for the above? try maintaining less packages, or try finding more time, there is no cheat code to maintaining packages.
Is that an official statement, or your personal opinion? It sounds opinionated to me. I had a quick look at the AUR submission guidelines, but was unable to find anything on automation or required pre-update checks, at all. If you can point me to relevant documentation, I'd appreciate that a lot. Also, I think it's accepted that the AUR has varying degrees of quality and the requirements are less strict than for core repositories. Is that a good thing? Depends, if you want a large package base, then yes. If you want a staging area for packages that *maybe* eventually get migrated into a core repo, also yes. If you want packages of the highest quality, then no. But IMHO, having a "vetting area" for packages makes tons of sense. Should quality be high in the AUR as well? Ideally yes, but welcome to the real world. The AUR packages are maintained by volunteers, who are not paid and who have a life outside the AUR (hopefully). I have the highest respect for people that maintain hundred or thousands of packages for the benefit of others, and if automation makes their lives easier - more power to them! I'd rather have a package that breaks occasionally and needs manual intervention *then*, rather than not having that package in AUR at all. But then: Take that with a grain of salt, it's personal opinion, not canon. Really, all of this is about *tradeoffs*. If I have a fully automated CI/CD pipeline for my AUR package, and it breaks once every two years, is that good enough? In my book, absolutely, but your milage may vary, and you are welcome to maintain your AUR packages differently. I am maintaining an AUR package for which I am also the main contributor, for example. I know when the dependencies change and when the license changes, so for me, 100% automation makes sense (my CI/CD pipeline catches build errors and has a decent test coverage, of course). Other maintainers have different situations and requirements. 
This finally is my main point: there is no simple "this is the right way to do it, and everything else is wrong". Rather, a lot of those decisions are situational and subjective. I don't believe in simple answers and "one size fits all" solutions in software engineering. There's just various degrees of broken-ness :)
In simple terms, automation is good, using it carelessly is bad.
Well, even used carefully it can go wrong; someone should always check whatever CI/CD task is executed. I feel developers rely too much on CI/CD these days and want to write a script to do everything, without realising that their intervention is still needed.
How can the Arch community educate AUR maintainers not to push untested PKGBUILDs?
You can't!
As I highlighted in an earlier post, using the AUR means digging through PKGBUILDs until you find a decent one. TUs don't have all day to go through all packages and check whether they are sticking to the packaging guidelines. As far as I am aware, they mainly focus on ensuring there are no illegal or malicious packages on the AUR, along with dealing with disputes about who should maintain what.
If you feel that a package is poorly maintained, ask for co-maintainer or submit a patch to fix whatever it is.
I do feel the packaging guidelines should reflect what both Anthraxx and Jelle have recommended. And I kindly ask, for the sake of the people who use fully automated packages, to please stop: you do not need to update the package within 2 hours of it being flagged out of date. You have upwards of 6 months until the package is orphaned, so there is no reason not to review your packages before committing.
My 2c: I will only switch from an automated workflow to a manual one if I see benefits to it. For my personal use cases, the automation is much less error prone than I am, and much more reliable (it never gets tired of checking the same old corner cases). Asking for 100% reliability is IMHO not realistic, in any of those scenarios. Thank you very much for reading until here, And if we shadows have offended, think but this, and all is mended: That you have all but slumbered here, while these visions did appear.
Remember to always check a package before you build it; hopefully the TUs find and remove most of the malicious packages, but better safe than sorry.
Have a good evening, -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
Hello Jingbei Li, On 4/13/23 19:43, Jingbei Li wrote:
Quoting Polarian <polarian@polarian.dev>:
I think people are forgetting that as a maintainer you are meant to look into each update: read the change logs, check if there are any incompatibilities, patch out anything which is non-free (if possible) or patch any issues with the source which might not compile on Arch Linux for whatever reason, check new dependencies, check if dependencies have been removed, check if the compile procedure has changed (and thus the PKGBUILD will need rewriting); the list goes on.
Sure, a PR to bump the release is always nice, but it should be checked against all the things I have named above, and then merged; then you push it manually.
If you don't have the time for the above, try maintaining fewer packages, or try finding more time; there is no cheat code to maintaining packages.
Is that an official statement, or your personal opinion? It sounds opinionated to me. I had a quick look at the AUR submission guidelines, but was unable to find anything on automation or required pre-update checks, at all. If you can point me to relevant documentation, I'd appreciate that a lot.
If you specifically seek an official answer: Jelle hasn't used his official e-mail address, but there are two messages in this thread from Jelle and me: - Jelle van der Waa <jelle@vdwaa.nl> - Levente Polyak <anthraxx@archlinux.org> Both of which you can consider official. Please deactivate your automation and fix the workflow. Thank you very much for your efforts and contributing to our community. Sincerely, Levente
Hello, Sorry for the late response, my server had a hardware failure and I lost my email for a few days. I was looking around the AUR, and I have realised almost all packages on GitHub are using CI/CD to automatically bump and push to the AUR. Anthraxx and Jelle, if you are going to enforce this, what actions should be taken against these packages? I have already submitted an issue on the GitHub repository of one of them, to relay these wishes, but I am not a TU and have no power to enforce it; find the link to this issue below: https://github.com/funilrys/aur-rocketchat-server/issues/7 I believe https://wiki.archlinux.org/title/AUR_submission_guidelines should be updated to reflect the wishes highlighted, but as noodle <silentnoodle@cock.li> has highlighted, what are we meant to do to keep this under control, because right now it's completely out of control. Replying to Oskar <oskar@oscloud.info>
I think I have to chime in here another time, to at least give a bit more context to the situation I faced, because this discussion is missing a lot of context about which problems people may run into and solve with more automation. (At least we, the ros-melodic on AUR people,) never had fully automated generation of new PKGBUILDs; you still had to confirm a diff and then push it to the CI/CD. There are large frameworks out there that aren't maintainable without automation, and their automation also does not have the problems mentioned here.
Having automation is not the issue, it is when the automation is never reviewed, and is automatically pushed to the AUR.
How do you even mean this? If I take this literally, you'd have to untar the pkg after building it and dig through all the files. I bet you're not doing that. A machine tool is much more efficient and less error-prone at this; that's why namcap exists. When the pkg builds fine and namcap doesn't warn about anything in the CI, what has been left out?
It was an example, obviously it was a bad one and I should have spent more time thinking about the example to use. The point I was trying to make is that human beings are still needed to review the changes.
At least in the case of ROS, pkgs provide extensive metadata via rosdep for their own build system. We parsed the metadata, and things such as license or dependencies got autogenerated. If there are any incompatibilities with Arch, then the build will fail in the CI and no mirroring to the AUR will happen, so why do you even come up with this? It's just wrong.
I have written this several times now: your job is not just to throw together a PKGBUILD, fling a licence onto it, push it to the AUR and call it a job done. You must do the following:
- Review the licences: is it dual licenced?
- Has the licence changed?
- What does the change log say? Are there any incompatibilities?
- Has the packaging procedure changed?
- Does the package use a different build tool now?
- Are there new dependencies?
- Are there outdated dependencies which need removing?
Jelle provided a few of these; in case you wanted a more complete list, I have listed some additional ones here. Jelle is a TU and knows what they are talking about, so why do you continue to try to prove Jelle wrong and continue advocating for automatic pushing to the AUR when it has been highlighted, by both Jelle (a TU) and Anthraxx (the leader of Arch Linux), that automatic pushing to the AUR is not allowed?
This ignores all cases you personally haven't run into. I didn't maintain ~400 packages to flex, feel good or something similar; I got them under one umbrella again because at the time multiple people only cared about a few parts of the framework, and it was all in the "worked for me this one day and I haven't updated since then" state. Maintainers wildly updated when they had time, breaking pkgs that depended upon the old version. Once I took over the majority of ROS melodic packages on the AUR, I mirrored them to GitHub, to allow people to propose contributions without having direct access. I couldn't spend 20 hours in a week when a new revision of the core parts was released. In the beginning, I tested every contribution on my machine and, surprise, this led to issues because I didn't have the 200 deps built in clean envs lying around somewhere. We then developed a CI/CD system that automatically rebuilt the whole dependency chains of packages, testing everything against freshly built packages and not some packages that were still ABI compatible out of sheer luck.
Firstly, I feel this has gone from a discussion to slander at this point. It feels like you are trying to prove my point invalid because you maintain more packages than me. I would rather know that the packages I maintain are kept to a high standard: I will not update a package until it has been built and tested within a clean environment, by a human being. Yes, it takes more time, but I know that when I push that commit (manually) to the AUR, the build is reproducible in ANY clean environment; I don't think you could say the same. It is quite simple: Arch Linux is a community, and you donate your time. I am sorry, but if you do not have time, then you can't donate it; using CI/CD to "save time" is like trying to pop money out of thin air to give to Arch Linux, it just isn't possible. So yes, if you cannot take the workload, then give up some of your packages. It's unfortunate, but life takes priority over contributing to a community.
Later on, we started using a Python script parsing rospkg metadata and updating the PKGBUILDs with find+replace, so that patches we applied on some of the packages weren't deleted. It's a cmdline tool and you still had to confirm the diff, but that's it. With those 3 things (GitHub + CI + update-helper script) we finally got the ROS melodic packages into a state where ROS melodic was working permanently, and not only for 2 weeks after some maintainers had spent a lot of manual time. We were never able to test the whole framework manually, as no one would ever use all the non-core parts or have a > 10.000 line codebase to test all the packages after installation, but with GH we were able to fix issues of users of some obscure package quickly, even if they hadn't figured out the solution and submitted a PR themselves.
As I have said, automation is not the problem; having CI/CD check for common mistakes or run some tests is good, but you must still test it manually, read the change logs, and do all the things I have listed above, otherwise you are not properly maintaining the package. Replying to tobi <tobi-wan-kenobi.at>
I rarely ever join these discussions, so please bear with me, should I breach the code of conduct somehow.
Nice to meet you, and I doubt it, the code of conduct is basically just "don't be an arse", with a few other conditions :P
(And yes, I am fully aware I will live to regret this mail)
I doubt it, sharing your opinion shouldn't be something you should be worried about, even if it drastically differs from others :)
I disagree here. Nobody tried to make the point that CI/CD can spot all errors. But I assume and hope your point is also not that a human can do so. I know *I* cannot.
Well, that is quite unfortunate. Yes, we don't always notice errors either, but I have more faith in a human being noticing an abnormal error, or something unexpected, rather than an automation task.
Automation and CI/CD is an excellent choice for everything that *can* be automated. Your example of "does it compile on Arch"? 10/10 for automation. Reproducible and a binary result. Why not automate that check?
I never said automation was bad; the point of the conversation was fully automated packages, ones which automatically bump their versions and push automatically to the AUR, which both Anthraxx and Jelle have highlighted is not allowed.
Is that an official statement, or your personal opinion? It sounds opinionated to me. I had a quick look at the AUR submission guidelines, but was unable to find anything on automation or required pre-update checks, at all. If you can point me to relevant documentation, I'd appreciate that a lot.
Depends which section. About the part of maintainers looking into the package they are packaging: no, that is official. I do not remember where, but I will quote Jelle here (a TU and thus official :P) to cover my arse:
Package updates always require some sort of manual review, you can't blindly bump a package to the next version as you might encounter:
* The license changed * Dependencies changed or there are new dependencies * Etc.
As for automation, that is the entire reason this thread was started: J.R did not know whether this was allowed or not, and that is what we are discussing, whether it is allowed (which Jelle and Anthraxx have ruled no on) and whether it is a good idea, which I gave my opinion on, which I am permitted to do.
Also, I think it's accepted that the AUR has varying degrees of quality and the requirements are less strict than for core repositories.
I highlighted this in previous emails: the AUR is very permissive when it comes to PKGBUILD standards compared to the official repositories, but this is not an excuse, and we should strive to keep it as compliant to the guidelines and conventions as possible. After all, the TUs might want to move the package into the official repositories, in which case the PKGBUILD must be of high quality; the nicest thing you can do is make the move as seamless as possible for the TU by keeping the PKGBUILD to a good standard.
Is that a good thing? Depends, if you want a large package base, then yes. If you want a staging area for packages that *maybe* eventually get migrated into a core repo, also yes. If you want packages of the highest quality, then no. But IMHO, having a "vetting area" for packages makes tons of sense.
Just because it is not as strict doesn't mean we should just be lazy and not bother to produce high quality PKGBUILDs. After all, the AUR is a sandpit to build packages which could later be adopted, or you could become a TU later down the line. Look at the TU application requirements, you are expected to maintain high quality packages, so yes it does matter: https://wiki.archlinux.org/title/Trusted_Users
Should quality be high in the AUR as well? Ideally yes, but welcome to the real world. The AUR packages are maintained by volunteers, who are not paid and who have a life outside the AUR (hopefully). I have the highest respect for people that maintain hundreds or thousands of packages for the benefit of others, and if automation makes their lives easier - more power to them! I'd rather have a package that breaks occasionally and needs manual intervention *then*, rather than not having that package in the AUR at all. But then: take that with a grain of salt; it's personal opinion, not canon.
TUs are not paid either, nor are the Arch Developers; it is entirely voluntary, thus this argument is invalidated. We have lots of people who want to contribute; saying that the package would not be in the AUR if the guidelines were too strict means nothing. When a package is not in the AUR, people feel the urge to push one: I pushed packages for software to the AUR because there were dead or non-existent packages for software listed within the ArchWiki.
Really, all of this is about *tradeoffs*. If I have a fully automated CI/CD pipeline for my AUR package, and it breaks once every two years, is that good enough? In my book, absolutely, but your mileage may vary, and you are welcome to maintain your AUR packages differently.
This violates the rules outlined by Anthraxx, please see the following quote from a later email from them:
If you specifically seek for an official answer:
Jelle hasn't used his official e-mail address, but there are two messages in this Thread from Jelle and me: - Jelle van der Waa <jelle@vdwaa.nl> - Levente Polyak <anthraxx@archlinux.org>
Both of which you can consider official.
Please deactivate your automation and fix the workflow. Thank you very much for your efforts and contributing to our community.
Again, this should be added to the AUR guidelines. I will bring this up with the ArchWiki team tomorrow to get their opinion on it; maybe a TU can give their opinion on the addition?
Other maintainers have different situations and requirements. This finally is my main point: there is no simple "this is the right way to do it, and everything else is wrong". Rather, a lot of those decisions are situational and subjective. I don't believe in simple answers and "one size fits all" solutions in software engineering. There's just various degrees of broken-ness 😄
Although the point is valid that there is no one correct way, we have guidelines, and they should be adhered to. A disordered community is a useless community. I will quote Erus here as I liked it :P "Arch is a small community, we all must pull the carpet in the same direction" ~ Erus (this was paraphrased from memory). It makes a valid point: if we are all writing packages in different ways, we are pulling the carpet in completely different directions and we get nowhere; we must keep going in the same direction. This is the exact reason why Anthraxx is the elected leader of Arch Linux: they decide the path moving forward, and they ruled. No TUs seem to want Anthraxx toppled from power, as no vote to remove Anthraxx has been conducted, as far as I am aware at least. Which therefore means: what Anthraxx says goes; they made the official call, so we must follow!
My 2c: I will only switch from an automated workflow to a manual one if I see benefits to it. For my personal use cases, the automation is much less error prone than I am, and much more reliable (it never gets tired of checking the same old corner cases). Asking for 100% reliability is IMHO not realistic, in any of those scenarios.
Humans make mistakes; even the core repositories do not have 100% reliability, it is rolling release after all, but that is why the Testing Team exists, and it seems to work (and I find it fun testing the software and signing off, no clue why, it's just soothing of sorts). We should always strive for 100%.
Thank you very much for reading until here,
Wouldn't be fair if I did not give the time to read an email which you probably spent ages typing. Sorry for the long email, but I wanted to try to answer everything I have missed over the last few days, without causing massive amounts of noise within the mailing list. Have a good night, -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
On 4/17/23 01:07, Polarian wrote:
Other maintainers have different situations and requirements. This finally is my main point: there is no simple "this is the right way to do it, and everything else is wrong". Rather, a lot of those decisions are situational and subjective. I don't believe in simple answers and "one size fits all" solutions in software engineering. There's just various degrees of broken-ness 😄
Although the point is valid, there is no one correct way, we have guidelines, and they should be adhered to.
A disordered community is a useless community, I will quote Erus here as I liked it :P
"Arch is a small community, we all must pull the carpet in the same direction" ~ Erus (this was paraphrased from memory)
It makes a valid point: if we are all writing packages in different ways, we are pulling the carpet in completely different directions and we get nowhere; we must keep going in the same direction. This is the exact reason why Anthraxx is the elected leader of Arch Linux: they decide the path moving forward, and they ruled.
No TUs seem to want Anthraxx toppled from power, as no vote to remove Anthraxx has been conducted, as far as I am aware at least.
Which therefore means, what Anthraxx says goes, they made the official call so we must follow!
I completely agree with the first part of your response, but I wanted to clarify a possible misconception in the latter part. As the project leader, I do have additional responsibilities, tasks, and duties, and try to drive our priorities inside a certain direction. But that doesn't mean my opinions and feedback should hold more weight than any other staff member associated with Arch Linux. It's crucial to me that we all operate on a level playing field and that my role doesn't affect the collaborative nature of our team. I may speak on behalf of our AUR maintenance team, but I do so as an equal member, not as a leader who rules on their sole opinion. Sincerely, Levente (while leaving the leadership hat inside the wardrobe)
Hey, On 23/04/17 12:07AM, Polarian wrote:
again this should be added to the AUR guidelines, I will bring this up with the ArchWiki team tomorrow to get their opinion on it, maybe a TU can give their opinion on the addition?
This has already been put into a draft[0] on the AUR Submission Guidelines page before. Currently I think this reflects the lowest common denominator brought up in the discussion, so maybe it needs to be adjusted further or is just right the way it is. In any case, people have requested for this to be reviewed by a larger audience, so feel free to have a look at it! :) (Please just don't re-unroll the ML discussion there :D) I hope y'all have a great week, Chris [0] https://wiki.archlinux.org/title/Talk:AUR_submission_guidelines#Automation_a...
Hello,
This already has been put into a draft[0] on the AUR Submission Guidelines page before.
I was not aware of this, I will check it out now!
People have requested for this to be reviewed by a larger audience so feel free to have a look at it!
Well I guess anyone who is involved in this discussion should review the suggestion on the ArchWiki?
(Please just dont re-unroll the ML discussion there :D)
Well you could link to it... couldn't you?
I hope y'all have a great week,
Thanks, you too :) -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
Sorry for the noise, I just want to correct my previous statement, I see Gromit (Chris) has already linked this mailing list, as "heated" -_- I did not think it was that heated... Anyways please ignore the statement about linking the ML as it has already been linked. Have a good day, -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
Full ack. When I maintained ~400 packages, the only way to ship them was to have a CI building them in a clean environment and autopushing them on successful builds.
But you haven't actually tested that the package installs, or whether the software works. Why do you think there are Arch Testers? To ensure that all the core packages are properly tested, no matter how important. Just because Arch Linux is rolling release doesn't mean we should throw stability right out the window!
Before I had this, packages were broken all the time. A clean build should be required before pushing any package, not just ones built by a CI.
Clean builds are fine to automate, just as long as they are reviewed after the build, before pushing. Makepkg clearly shows warnings which could cause the package to not function as it should; you can't just assume that something which builds works, and have a CI/CD task push it to the AUR. The reason that Arch TUs can maintain 1000-2000 packages each is that they have the Arch Testers: they can push a new build and get it tested by someone else. In the AUR, you do not have this luxury, so simply stop being lazy and test the damn package, or orphan it for someone else to take your place. Or if you have a large following, employ your own testers for your packages :P -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
On Wed, Apr 12, 2023 at 06:01:10PM +0100, Polarian wrote:
Full ack. When I maintained ~400 packages, the only way to ship them was to have a CI building them in a clean environment and autopushing them on successful builds.
But you haven't actually tested the package installs, does the software work?
Why do you think there are Arch Testers? To ensure that all the core packages are properly tested, no matter how important.
Core packages, sure. This is the AUR though. Additionally, if a package is broken and fails testing, it wouldn't be pushed by the CI/CD infra. In this particular case discussed here, there were no such checks, and I'm sure that issue will be remedied shortly.
Just because Arch Linux is rolling release doesn't mean we should throw stability right out the window!
I don't think anyone is suggesting that. CI/CD testing expands and augments any other prior existing testing.
Before I had this, packages were broken all the time. Clean build should be necessary before pushing any package, not just ones built by a CI.
Clean builds are fine to automate, just as long as they are reviewed after the build, before pushing it.
Makepkg clearly shows warnings which could cause the package to not function as it should, you can't just assume something which is built works, and have a CI/CD task push it to the AUR.
In the AUR, you do not have this luxury, simply stop being lazy and test the damn package, or orphan it for someone else to take your place.
Seems a bit extreme of a stance for a simple error in a package build which was pushed, but you do you.
makepkg has a rich collection of error/exit codes which help specify this for testing. I, personally, see no reason why a package that: - is able to be built in a clean env - installs properly in another clean env - returns no error codes besides `0` from makepkg wouldn't be qualified for an automated push/update. After all, such inspection is likely what one would do manually, anyway.
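Those three criteria can be wired into a small pre-push gate. The sketch below is just one possible shape: only the gate's control flow is real, while the actual build and install commands (makechrootpkg from devtools, pacman -U, the chroot path) are assumptions from this thread and left commented out.

```shell
#!/bin/bash
# Hypothetical pre-push gate: every step must exit 0 before anything
# gets pushed to the AUR.
set -u

gate() {
    local step
    # Run each check in order; bail out on the first non-zero exit code.
    for step in "$@"; do
        if ! "$step"; then
            echo "FAIL: $step -- not pushing"
            return 1
        fi
    done
    echo "all checks passed, safe to push"
}

# Example wiring (commented out; assumes devtools and a prepared chroot):
# clean_build()   { makechrootpkg -c -r "$CHROOT"; }
# install_check() { sudo pacman -U --noconfirm ./*.pkg.tar.zst; }
# gate clean_build install_check && git push aur master
```

The gate takes shell function names so each check stays a single testable unit; swapping in or reordering checks is just a matter of changing the argument list.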
Or if you have a large following, employ your own testers for your packages :P
Good luck funding that :) If the package is large enough to fund multiple full-time testers, it probably won't be in the AUR, and would instead be in the main repos. At the end of the day, someone made a small mistake and it was published by an automated task. It was a small problem and I'm sure it will be fixed soon. I don't see any reason to instantly jump to extreme conclusions based on any of this. Regards, -- Tom Swartz
Hi, I'm the creator of AutoUpdateBot. This bot was originally designed as a bot for the arch4edu repository, to automatically update the packages of the arch4edu maintainers on the AUR. Our build bot then tests the new PKGBUILDs, and we fix any found error in a day or two, or even downgrade the package on the AUR when necessary. Then I decided to open this bot to everyone, and I did consider testing before pushing at that time. However, it's hard to test the PKGBUILD for those packages which have AUR dependencies, so I haven't set up any tests yet. I still don't have a solution for packages with AUR dependencies, but I'm planning to alleviate this issue by testing packages without AUR dependencies, sending an email to the maintainer when there's an update, and adding a pinned comment to inform the users about the automatic updates. If you have any suggestions on how to improve this bot, please reply to https://github.com/arch4edu/aur-auto-update/issues/30 . Best regards, Jingbei Li
On 4/12/23 20:02, Jingbei Li wrote:
Hi, I'm the creator of AutoUpdateBot. This bot is originally designed as a bot for the arch4edu repository to automatically update the packages of the arch4edu maintainers on AUR. Then our build bot will test the new PKGBUILDs and we will fix any found error in a day or two or even downgrade the package on AUR when necessary.
Then I decided to open this bot to everyone and I did consider about testing before pushing at that time. However, it's hard to test the PKGBUILD for those packages which have AUR dependencies. So I haven't set up any test yet.
I still don't have a solution for package with AUR dependencies. But I'm planning to alleviate this issue by testing packages without AUR dependencies, sending an email to the maintainer when there's an update and adding a pinned comment to inform the users about the automatic updates.
If you have any suggestion on how to improve this bot please reply to https://github.com/arch4edu/aur-auto-update/issues/30 .
Best regards, Jingbei Li
Hi Jingbei Li, Thank you very much for investing time, effort and resources into Arch Linux and the AUR. Taking the currently described state into account, I would like to kindly request that you stop and disable the automatic pushing. Bumping packages without any testing and check() is not a good thing, even when you try to revert afterwards. You can investigate how to set up a custom pacman repository for your AUR packages and make it accessible to your builders (e.g. via https). pacman provides low level tools for creating a database out of packages (repo-add, repo-remove). Then you can provide a custom pacman.conf to makechrootpkg containing the repository. You'd initially need to populate the repository starting from the leaf packages. If you have further questions I'm sure the community is open to help you out. Sincerely, Levente
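For anyone wanting to follow that suggestion, a rough sketch might look like the following. The repo path, URL and package names are placeholders, and the pacman/devtools commands themselves are left as comments since they need built packages, root, and the devtools package; only the directory and config-fragment setup is executed.

```shell
#!/bin/bash
# Sketch of a custom pacman repository for AUR dependencies, per the
# steps above. All names and paths are placeholders.
set -eu

REPO="${REPO:-./aur-repo}"   # directory later served to builders, e.g. over https
mkdir -p "$REPO"

# pacman.conf fragment the build chroot would include for this repo:
cat > "$REPO/custom-repo.conf" <<'EOF'
[custom]
SigLevel = Optional TrustAll
Server = https://example.org/aur-repo
EOF

# The pacman-provided steps themselves (not run here):
#   repo-add    "$REPO/custom.db.tar.gz" "$REPO"/mypkg-*.pkg.tar.zst
#   repo-remove "$REPO/custom.db.tar.gz" mypkg
#   mkarchroot -C pacman.conf "$CHROOT/root" base-devel
#   makechrootpkg -r "$CHROOT"
```

As noted above, the repository has to be populated leaf-first: build the packages with no AUR deps, repo-add them, and only then can packages depending on them resolve inside the chroot.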
We have caught the attention of the notorious Anthraxx :P
Taking the currently described state into account I would like to kindly request that you stop and disable the automatic pushing. Bumping packages without any testing and check() is not a good thing, even when you try to revert afterwards.
So that I don't infer the wrong thing from this: you are stating here that you should not have a bot automatically push commits to the AUR?
You can investigate how to setup a custom pacman repository for your AUR packages and make it accessible to your builders (f.e. via https). pacman provides low level tools for creating a database out of packages (repo-add, repo-remove) Then you can provide a custom pacman.conf to makechrootpkg containing the repository. You'd initially need to populate the repository starting from the leaf packages.
I can help out with this if you like Jingbei Li, I have messed around with custom repositories and also dealing with AUR dependencies before, if you want to automate this I might be able to help out coding a custom tool with you? It seems you use Python, which I am confident in, so let me know if you want any help from me :) (feel free to offlist) (also as a sidenote, I remember the lack of documentation of custom repositories being brought up within ArchWiki, I should look back into this)
If you have further questions I'm sure the community is open to help you out.
See above :) Have a good day, -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
Am Mittwoch, 12. April 2023 20:39:48 CEST schrieb Levente Polyak:
I still don't have a solution for package with AUR dependencies. But I'm planning to alleviate this issue by testing packages without AUR dependencies, sending an email to the maintainer when there's an update and adding a pinned comment to infor ...
You can investigate how to setup a custom pacman repository for your AUR packages and make it accessible to your builders (f.e. via https). pacman provides low level tools for creating a database out of packages (repo-add, repo-remove) Then you can provide a custom pacman.conf to makechrootpkg containing the repository. You'd initially need to populate the repository starting from the leaf packages.
If you have further questions I'm sure the community is open to help you out.
When I packaged ~400 ROS melodic packages in collaboration with others, I wrote a piece of software that allows you to build packages with AUR deps in clean environments. It resolves the build order, optionally mirrors successfully built packages from a git host to the AUR, and displays all the build logs on a website. You can also trigger the rebuild of a package or its whole dependency chain manually; otherwise the SW will automatically check if the PKGBUILD repos got updates. https://github.com/bionade24/abs_cd Regards, Oskar
Hi, I'm the creator of AutoUpdateBot. This bot is originally designed as a bot for the arch4edu repository to automatically update the packages of the arch4edu maintainers on AUR. Then our build bot will test the new PKGBUILDs and we will fix any found error in a day or two or even downgrade the package on AUR when necessary.
Might just be me, but this would annoy me a lot. You are not allowed to rewrite AUR history without a TU helping, and pushing a commit that reverts the previous commit's changes is extremely confusing. In my opinion it would always be easier to make 100% sure that the package works before pushing it to the AUR.
Then I decided to open this bot to everyone, and I did consider testing before pushing at that time. However, it's hard to test the PKGBUILDs of packages which have AUR dependencies, so I haven't set up any tests yet.
Which seems to have picked up a lot of attention from people who want to automate things; by the sounds of it, that is because they do not want to waste time doing package bumps? I am not too sure of the reasons, to be honest, because I can't really relate to them.
I still don't have a solution for packages with AUR dependencies. But I'm planning to alleviate this issue by testing packages without AUR dependencies, sending an email to the maintainer when there's an update, and adding a pinned comment to inform the users about the automatic updates.
If you would like some help, offlist me, I can help out with this :) Have a good day, -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
Hey,
This is the AUR though.
Why should we treat AUR packages as any less? You don't treat code quality as worthless just because it's a smaller project; you still write and maintain the codebase to the highest standard. This applies to the AUR too: just because it's on the AUR doesn't mean you should throw quality out the window!
Additionally, if a package is broken and fails testing, it wouldn't be pushed by the CI/CD infra.
How do you know? Just because something builds doesn't mean it is functional. You don't test code by compiling it and hoping it works if it compiles successfully.
makepkg has a rich collection of error/exit codes which help specify this for testing.
I, personally, see no reason why, if the package:
- is able to be built in a clean env
- installs properly in another clean env
- returns no error codes besides `0` from makepkg
it wouldn't be qualified for an automated push/update.
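That gating logic could be sketched as below; the stage commands are placeholders, and in practice the real stages would be something like `makechrootpkg` for the clean build and `pacman -U` in a throwaway environment for the install check:

```shell
# Run each stage in order and only report success if every one exits 0.
# The stages passed in are placeholders for real build/install steps
# such as "makechrootpkg -c -r /var/lib/archbuild" (hypothetical paths).
gate_update() {
    for stage in "$@"; do
        if ! $stage; then
            printf 'stage failed: %s\n' "$stage" >&2
            return 1
        fi
    done
    printf 'all stages passed; safe to push\n'
}

# Example with trivial stand-in stages:
gate_update true true
```

The point of the wrapper is simply that a nonzero exit from any stage stops the pipeline before anything is pushed to the AUR.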
Like I said, although it has a rich collection, a package can build, just like code, but still contain logical issues, such as forgetting to move a binary into the package during the build, thus leaving the package defective. You can't test for this within your CI/CD.
After all, such inspection is likely what one would do manually, anyway.
Manually pushing to the AUR is still the way to go, and considering that botting the AUR is disallowed (unless there are exceptions?), surely this includes CI/CD pushing commits to the AUR?
Seems a bit extreme of a stance for a simple error in a package build which was pushed, but you do you.
Well, maybe I care too much about the quality of my PKGBUILDs, but is that a bad thing? Every time I rush, or do not spend the time a package deserves, it is shipped partially or fully broken. Packaging takes time; you can't expect CI/CD to do it all for you.
Good luck funding that 😄 If the package is large enough to fund multiple full-time testers, it probably won't be in the AUR, and would instead be in the main repos.
It was a joke, sorry if I did not make that clear! (I thought :P would make it clear :/)
At the end of the day, someone made a small mistake and it was published by an automated task. It was a small problem and I'm sure it will be fixed soon. I don't see any reason to instantly jump to extreme conclusions based on any of this.
Because it is technically not within the packaging guidelines, and is disputed. Until someone with higher authority rules whether it should or shouldn't be used, it should be used sparingly. -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
On 4/12/23 20:55, Polarian wrote:

> You don't test code by compiling it and hoping it works if it
> compiles successfully.

True. Nobody claimed this to be what "testing" means, I think?

> Like I said, although it has a rich collection, a package can be
> built, just like code, but still contain logical issues within the
> package, such as you forgetting to move a binary into the package
> during the build, and thus the package being defective.
>
> You can't test for this within your CI/CD.

This is patently false. If you can inspect it manually, you can and should automate it. The example of a "forgotten binary" is hilariously trivial to test for; how is this even an argument? A totally naive test: `[[ -x pkg/usr/bin/foo ]]`. Feel free to pimp that with `file` or some binutils magic to assert the correct architecture and binfmt, but this is a perfectly fine test to start with. A conservative professional would create a tripwire-esque list of expected essential file locations and attributes, assert those en bloc, and fail if *anything* changes from that expectation, preventing the automated deployment.

> Well, maybe I care too much about the quality of my PKGBUILDs, but is
> that a bad thing?
>
> Every time I rush, or do not spend the time the package deserves, it
> is shipped partially or fully broken.
>
> Packaging takes time, you can't expect CI/CD to do it all for you.

Cooking with love always tastes better, too, right? Sarcasm aside: how can you so obviously tell us that your process is broken, yet so adamantly insist on continuing to do it that way? The whole point of test automation is to have a reliable, reproducible, and cheap process without random (human) errors. Solid system integration tests are very rigorously defined lists of steps and assertions to do and verify. Sounds a lot like something a computer would be good at, right?

To prevent any misunderstandings: automated promotion of any artifact dropping out of a CD build pipeline without a 100% green, sufficiently comprehensive, automated test stage is a broken abomination. Period.
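A minimal sketch of that tripwire-esque manifest check might look as follows; the entry format and all example paths are hypothetical, invented for illustration:

```shell
# Assert that essential files exist in the built package tree before
# allowing an automated push. Entry format: "x:path" for an expected
# executable, "f:path" for an expected regular file.
check_manifest() {
    pkgdir=$1; shift
    rc=0
    for entry in "$@"; do
        type=${entry%%:*}
        path=${entry#*:}
        case $type in
            x) [ -x "$pkgdir/$path" ] || { echo "missing executable: $path" >&2; rc=1; } ;;
            f) [ -f "$pkgdir/$path" ] || { echo "missing file: $path" >&2; rc=1; } ;;
        esac
    done
    return $rc
}

# Hypothetical usage against makepkg's $pkgdir:
#   check_manifest pkg "x:usr/bin/foo" "f:usr/share/licenses/foo/LICENSE"
```

A nonzero return from the check would then stop the pipeline, catching exactly the "forgotten binary" class of defect before the package ever reaches the AUR.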
Using namcap and adding package-specific smoke tests, like checking the exit code of running `foobin --version` in a clean Arch chroot, plus some filesystem plausibility checks beyond the generic namcap tests, plus any regression tests to make sure a previously found issue with the package doesn't resurface: that's a good baseline for a packager. *Functional* testing of software is upstream's business. A packager is first and foremost a system integrator, making sure the software fits snugly into the distribution-specific nest by, f. ex., adding systemd units that are not supplied by upstream for every distribution under the sun, and by _at least_ testing if a current snapshot of Arch is suitable to execute the binaries.

Fun fact: nobody can tell if your well-tested AUR package even builds a day later, when Arch replaces glibc with µclibc, punts systemd entirely, and switches the binfmt from ELF to COFF. Yeah, not reasonable, but you don't seem to be aware how little stability Arch actually offers, and how much of it only stems from a lot of people sticking to many, many conventions, and knowingly accepting that things may break™. Funnily enough, I could easily run automated tests against a daily refreshed chroot with nightly builds, to get an early warning of such issues. Not going to happen without automation.

Well, I deleted three pages worth of rant. :) Please do not spout automation FUD any more, it's painful. Please get over the D-K peak first. I'm out, touching some grass. Peace.
Hello,
All of my packages are maintained by CI and are auto-updating. I don't have the time (or, to phrase it better: I'm not willing to invest the time) to do tasks manually that I can easily automate. On the other hand, all of my package-update automations patch the build and then execute it in a clean environment. If the package does not build, the automation will break and notify me to have a look at what's broken.
Using CI/CD to automate a task isn't the problem; the problem is when that task pushes the package to the AUR without any testing.
In the end: What's the difference between a maintainer just modifying version and checksums and then pushing the broken package to AUR and an automation doing the same? Also: What's the difference between a maintainer patching version and checksums, executing a clean build and then pushing it and an automation doing the same? - Nothing.
Because automation can't spot errors, human beings can!
So yeah, in my opinion maintainers (or automations) should at least do a clean build on update before pushing it. Putting up a policy against automations will just lead to maintainers still doing it in secret or to maintainers dropping a bunch of packages to orphan.
Fine by me. I strongly believe that if you adopt a package you should put all the effort you can into it; when one maintainer falls, another takes their place. I test every build before I push it to the AUR, apart from certain tools which I do not use or do not know how to use, because I adopted them when they needed a maintainer. TL;DR: automation on a different remote, sure, but automatically pushing to the AUR should be strictly prohibited! -- Polarian GPG signature: 0770E5312238C760 Website: https://polarian.dev JID/XMPP: polarian@polarian.dev
participants (12)
- Christian Heusel
- Dennis Herbrich
- j.r
- Jelle van der Waa
- Jingbei Li
- Knut Ahlers
- Levente Polyak
- noodle
- Oskar Roesler
- Polarian
- tobi@tobi-wan-kenobi.at
- tom@tswartz.net