Hello,

Sorry for the late response; my server had a hardware failure and I lost access to my email for a few days.

I was looking around the AUR, and I have realised that almost all of the AUR packages maintained on GitHub are using CI/CD to automatically bump and push to the AUR. Anthraxx and Jelle, if you are going to enforce this, what actions should be taken against these packages? I have already submitted an issue on the GitHub repository of one of them to relay these wishes, but I am not a TU and have no power to enforce it; find the link to this issue below:

https://github.com/funilrys/aur-rocketchat-server/issues/7

I believe https://wiki.archlinux.org/title/AUR_submission_guidelines should be updated to reflect the wishes highlighted, but as noodle <silentnoodle@cock.li> has highlighted, what are we meant to do to keep this under control? Because right now it's completely out of control.

Replying to Oskar <oskar@oscloud.info>
I think I have to chime in here another time, to at least give a bit more context to the situation I faced, because this discussion is missing a lot of context about which problems people may run into and solve with more automation. At least we, the ros-melodic-on-AUR people, never had fully automated generation of new PKGBUILDs; you still had to confirm a diff and then push it to the CI/CD. There are large frameworks out there that aren't maintainable without automation, and their automation also does not have the problems mentioned here.
Having automation is not the issue; the issue is when the automation's output is never reviewed and is pushed to the AUR automatically.
How do you even mean this? If I take this literally, you'd have to untar the pkg after building it and dig through all the files. I bet you are not doing that. A machine tool is much more efficient and less error-prone at this; that's why namcap exists. When the pkg builds fine and namcap doesn't warn about anything in the CI, what has been left out?
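(For illustration only: such a CI gate can be a handful of lines. The sketch below assumes the freshly built *.pkg.tar.zst files sit in the working directory and fails the job if namcap reports anything at all; both assumptions are mine, not a description of any actual setup.)

    # ci_namcap_gate.py -- illustrative sketch of a "builds fine and namcap is silent" CI gate
    import glob
    import subprocess
    import sys

    def main() -> int:
        pkgs = glob.glob("*.pkg.tar.zst")          # assumption: build artifacts land here
        if not pkgs:
            print("no built package found, failing the job")
            return 1
        findings = []
        for pkg in pkgs:
            # namcap prints one finding per line; empty output means it has no complaints
            out = subprocess.run(["namcap", pkg], capture_output=True, text=True).stdout
            if out.strip():
                findings.append(pkg + ":\n" + out)
        if findings:
            print("\n".join(findings))
            return 1                                # block any later mirror/push step
        return 0

    if __name__ == "__main__":
        sys.exit(main())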
It was an example, and obviously a bad one; I should have spent more time thinking about which example to use. The point I was trying to make is that human beings are still needed to review the changes.
At least in the case of ROS, pkgs provide extensive metadata via rosdep for their own build system. We parsed the metadata, and things such as the license or dependencies were autogenerated. If there are any incompatibilities with Arch, well, then the build will fail in the CI and no mirroring to the AUR will happen, so why do you even come up with this? It's just wrong.
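(To make the autogeneration idea concrete for readers outside ROS: every ROS package ships a package.xml manifest carrying its license and dependencies, so a rough sketch of pulling those into PKGBUILD fields could look like the following. The tag names are the format-1 manifest tags, and the ROS-name-to-Arch-name resolution that rosdep actually performs is left out entirely.)

    # ros_meta_to_pkgbuild.py -- rough sketch, not the actual generator
    import sys
    import xml.etree.ElementTree as ET

    def pkgbuild_fields(package_xml: str) -> dict:
        root = ET.parse(package_xml).getroot()
        return {
            "pkgdesc":     (root.findtext("description") or "").strip(),
            "license":     [el.text for el in root.findall("license")],
            # ROS splits deps into build/run; a real generator would still have to
            # map ROS dependency names onto Arch package names (rosdep's job)
            "makedepends": [el.text for el in root.findall("build_depend")],
            "depends":     [el.text for el in root.findall("run_depend")],
        }

    if __name__ == "__main__":
        print(pkgbuild_fields(sys.argv[1]))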
I have written this several times now: your job is not just to throw together a PKGBUILD, fling a licence onto it, push it to the AUR and call it a job done. You must do the following:

- Review the licences; is it dual licenced?
- Has the licence changed?
- What does the changelog say? Are there any incompatibilities?
- Has the packaging procedure changed?
- Does the package use a different build tool now?
- Are there new dependencies?
- Are there outdated dependencies which need removing?

Jelle provided a few of these; in case you wanted a more complete list, I have listed some additional ones here. Jelle is a TU and knows what they are talking about, so why do you continue to try to prove Jelle wrong and keep advocating for automatic pushing to the AUR when it has been highlighted, by both Jelle (a TU) and Anthraxx (the leader of Arch Linux), that automatic pushing to the AUR is not allowed?
This ignores all the cases you personally haven't run into. I didn't maintain ~400 packages to flex, feel good or something similar; I got them under one umbrella again because at the time multiple people each only cared about a few parts of the framework, and it was all in the "worked for me this one day and I haven't updated it since" state. Maintainers updated wildly whenever they had time, breaking pkgs that depended upon the old version. Once I took over the majority of the ROS melodic packages on the AUR, I mirrored them to GitHub to allow people to propose contributions without having direct access. I couldn't spend 20 hours in a week whenever a new revision of the core parts was released. In the beginning, I tested every contribution on my machine and, surprise, this led to issues because I didn't have the 200 deps built in clean envs lying around somewhere. We then developed a CI/CD system that automatically rebuilds the whole dependency chain of a package, testing everything against freshly built packages and not against packages that were still ABI compatible out of sheer luck.
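(Roughly what that boils down to, as a sketch rather than a description of the real pipeline: a topological sort over the intra-framework dependency graph, then a clean-chroot build per package in that order, so each build only sees artifacts produced earlier in the same run. The dependency map and the build command below are placeholders I made up for illustration.)

    # rebuild_chain.py -- sketch of dependency-ordered clean rebuilds, not the real pipeline
    import subprocess
    from graphlib import TopologicalSorter      # Python 3.9+

    # hypothetical map: package directory -> packages it depends on within the framework
    DEPS = {
        "ros-melodic-rosconsole": set(),
        "ros-melodic-roscpp": {"ros-melodic-rosconsole"},
    }

    for pkg in TopologicalSorter(DEPS).static_order():
        # build each package in a clean chroot so it is tested against the freshly
        # built dependencies, not whatever happens to be installed on the host;
        # a real pipeline would also install the just-built deps into the chroot
        subprocess.run(["extra-x86_64-build"], cwd=pkg, check=True)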
Firstly, I feel this has gone from a discussion to slander at this point. It feels like you are trying to prove my point invalid because you maintain more packages than me. I would rather know that the packages I maintain are kept to a high standard; I will not update a package until it has been built and tested within a clean environment, by a human being. Yes, it takes more time, but I know that when I push that commit (manually) to the AUR, the build is reproducible in ANY clean environment. I don't think you could say the same. It is quite simple: Arch Linux is a community, and you donate your time. I am sorry, but if you do not have the time, then you can't donate it; using CI/CD to "save time" is like trying to pop money out of thin air to give to Arch Linux, it just isn't possible. So yes, if you cannot take the workload, then give up some of your packages. It is unfortunate, but life takes priority over contributing to a community.
Later on, we started using a Python script that parsed rospkg metadata and updated the PKGBUILDs with find+replace, so that patches we had applied to some of the packages weren't deleted. It's a command-line tool and you still had to confirm the diff, but that's it. With those three things, GitHub + CI + the update-helper script, we finally got the ROS melodic packages into a state where ROS melodic was working permanently, and not only for the two weeks after some maintainer had spent a lot of manual time. We were never able to test the whole framework manually, as no one would ever use all the non-core parts or have a >10,000-line codebase to test all the packages after installation, but with GitHub we were able to fix issues hit by users of some obscure package quickly, even if they hadn't figured out the solution and submitted a PR themselves.
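(For context, the confirm-the-diff part of such a helper can be tiny. The sketch below only bumps pkgver/pkgrel with a find+replace so local patches survive, prints the diff, and writes nothing until a human says yes; the checksum and metadata handling of the real tool is left out, and the whole thing is my illustration, not the ROS team's script.)

    # bump_pkgver.py -- illustrative confirm-the-diff updater, not the ROS team's tool
    import difflib
    import re
    import sys

    def bump(path: str, new_ver: str) -> None:
        old = open(path).read()
        # plain find+replace so patches and custom prepare()/build() steps stay untouched
        new = re.sub(r"^pkgver=.*$", "pkgver=" + new_ver, old, flags=re.M)
        new = re.sub(r"^pkgrel=.*$", "pkgrel=1", new, flags=re.M)
        sys.stdout.writelines(difflib.unified_diff(
            old.splitlines(keepends=True), new.splitlines(keepends=True),
            fromfile=path, tofile=path))
        # nothing is written (let alone pushed anywhere) until the diff is confirmed
        if input("apply? [y/N] ").strip().lower() == "y":
            open(path, "w").write(new)

    if __name__ == "__main__":
        bump("PKGBUILD", sys.argv[1])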
As I have said, automation is not the problem. Having CI/CD check for common mistakes or run some tests is good, but you must still test manually, read the changelogs and do all the things I have listed above, otherwise you are not properly maintaining the package.

Replying to tobi <tobi-wan-kenobi.at>
I rarely ever join these discussions, so please bear with me, should I breach the code of conduct somehow.
Nice to meet you, and I doubt it; the code of conduct is basically just "don't be an arse", with a few other conditions :P
(And yes, I am fully aware I will live to regret this mail)
I doubt it; sharing your opinion shouldn't be something to worry about, even if it drastically differs from others' :)
I disagree here. Nobody tried to make the point that CI/CD can spot all errors. But I assume and hope your point is also not that a human can do so. I know *I* cannot.
Well, that is quite unfortunate. Yes, we don't always notice errors either, but I have more faith in a human being noticing an abnormal error, or something unexpected, than in an automated task.
Automation and CI/CD is an excellent choice for everything that *can* be automated. Your example of "does it compile on Arch"? 10/10 for automation. Reproducible and a binary result. Why not automate that check?
I never said automation was bad. The point of the conversation was fully automated packages, ones which automatically bump their versions and push automatically to the AUR; both Anthraxx and Jelle have highlighted that this is not allowed.
Is that an official statement, or your personal opinion? It sounds opinionated to me. I had a quick look at the AUR submission guidelines, but was unable to find anything on automation or required pre-update checks, at all. If you can point me to relevant documentation, I'd appreciate that a lot.
It depends which part. Regarding maintainers looking into the package they are packaging, that is official. I do not remember where it is written down, but I will quote Jelle here (a TU, and thus official :P) to cover my arse:
Package updates always require some sort of manual review, you can't blindly bump a package to the next version as you might encounter:
* The license changed
* Dependencies changed or there are new dependencies
* Etc.
As for automation, that is the entire reason this thread was started: J.R did not know whether it was allowed or not, and that is what we are discussing, both whether it is allowed (which Jelle and Anthraxx have ruled no on) and whether it is a good idea, on which I gave my opinion, as I am permitted to do.
Also, I think it's accepted that the AUR has varying degrees of quality and the requirements are less strict than for core repositories.
I highlighted this in previous emails: the AUR is very permissive when it comes to PKGBUILD standards compared to the official repositories, but this is not an excuse, and we should strive to keep packages as compliant with the guidelines and conventions as possible. After all, a TU might want to move a package into the official repositories, in which case the PKGBUILD must be of high quality; the nicest thing you can do is make the move as seamless as possible for the TU by keeping the PKGBUILD to a good standard.
Is that a good thing? Depends, if you want a large package base, then yes. If you want a staging area for packages that *maybe* eventually get migrated into a core repo, also yes. If you want packages of the highest quality, then no. But IMHO, having a "vetting area" for packages makes tons of sense.
Just because it is not as strict doesn't mean we should be lazy and not bother to produce high quality PKGBUILDs. After all, the AUR is a sandpit to build packages which could later be adopted, or you could become a TU later down the line. Look at the TU application requirements: you are expected to maintain high quality packages, so yes, it does matter: https://wiki.archlinux.org/title/Trusted_Users
Should quality be high in the AUR as well? Ideally yes, but welcome to the real world. The AUR packages are maintained by volunteers, who are not paid and who have a life outside the AUR (hopefully). I have the highest respect for people who maintain hundreds or thousands of packages for the benefit of others, and if automation makes their lives easier, more power to them! I'd rather have a package that breaks occasionally and needs manual intervention *then* than not have that package in the AUR at all. But then: take that with a grain of salt, it's personal opinion, not canon.
TUs are not paid either, nor are the Arch developers; it is entirely voluntary, so this argument is invalid. We have lots of people who want to contribute, so saying that a package would not be in the AUR at all if the guidelines were stricter means little. When a package is not in the AUR, people feel the urge to push one; I pushed packages to the AUR because there were dead or non-existent packages for software listed on the ArchWiki.
Really, all of this is about *tradeoffs*. If I have a fully automated CI/CD pipeline for my AUR package, and it breaks once every two years, is that good enough? In my book, absolutely, but your mileage may vary, and you are welcome to maintain your AUR packages differently.
This violates the rules outlined by Anthraxx; please see the following quote from a later email from them:
If you specifically seek an official answer:
Jelle hasn't used his official e-mail address, but there are two messages in this thread from Jelle and me:

- Jelle van der Waa <jelle@vdwaa.nl>
- Levente Polyak <anthraxx@archlinux.org>
Both of which you can consider official.
Please deactivate your automation and fix the workflow. Thank you very much for your efforts and contributing to our community.
Again, this should be added to the AUR guidelines. I will bring this up with the ArchWiki team tomorrow to get their opinion on it; maybe a TU can give their opinion on the addition as well?
Other maintainers have different situations and requirements. This, finally, is my main point: there is no simple "this is the right way to do it, and everything else is wrong". Rather, a lot of those decisions are situational and subjective. I don't believe in simple answers and "one size fits all" solutions in software engineering. There are just varying degrees of broken-ness 😄
Although the point that there is no one correct way is valid, we have guidelines, and they should be adhered to. A disordered community is a useless community. I will quote Erus here as I liked it :P

"Arch is a small community, we all must pull the carpet in the same direction" ~ Erus (paraphrased from memory)

It makes a valid point: if we are all writing packages in different ways, we are pulling the carpet in completely different directions and we get nowhere. We must keep going in the same direction, and this is the exact reason why Anthraxx is the elected leader of Arch Linux: they decide the path moving forward, and they have ruled. No TUs seem to want Anthraxx toppled from power, as no vote to remove Anthraxx has been conducted, as far as I am aware at least. Which therefore means: what Anthraxx says goes; they made the official call, so we must follow!
My 2c: I will only switch from an automated workflow to a manual one if I see benefits to it. For my personal use cases, the automation is much less error prone than I am, and much more reliable (it never gets tired of checking the same old corner cases). Asking for 100% reliability is IMHO not realistic, in any of those scenarios.
Humans make mistakes, and even the core repositories do not have 100% reliability; it is rolling release after all. But that is why the Testing Team exists, and it seems to work (and I find it fun testing the software and signing off, no clue why, it's just soothing of sorts). We should still always strive for 100%.
Thank you very much for reading until here,
It wouldn't be fair if I did not take the time to read an email which you probably spent ages typing. Sorry for the long email, but I wanted to try to answer everything I have missed over the last few days without causing massive amounts of noise within the mailing list.

Have a good night,

--
Polarian
GPG signature: 0770E5312238C760
Website: https://polarian.dev
JID/XMPP: polarian@polarian.dev