Hello,

I think I have to chime in here another time, to give at least a bit more context on the situation I faced, because this discussion is missing a lot of context about which problems people may run into and solve with more automation. At least we, the ros-melodic-on-AUR people, never had fully automated generation of new PKGBUILDs: you still had to confirm a diff and then push it to the CI/CD. There are large frameworks out there that aren't maintainable without automation, and their automation also does not have the problems mentioned here.

On Thursday, 13 April 2023 17:44:44 CEST, Polarian wrote:
> The issue is people want fully automated maintaining of AUR packages, which is just not possible. And this entire thread has been trying to prove CI/CD is intelligent enough to spot ALL the errors which could occur during a build and say that CI/CD can check them all, which is just not possible.
What do you even mean by this? If I take it literally, you'd have to untar the package after building it and dig through all the files. I bet you are not doing that. A machine tool is much more efficient and less error-prone at this; that's why namcap exists. When the package builds fine and namcap doesn't warn about anything in the CI, what has been left out?
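To make this concrete, here is a minimal sketch of such a CI gate. The names and flags are illustrative, and it assumes makepkg and namcap are available on the runner inside a clean build environment; our real setup was more involved:

  #!/usr/bin/env python3
  """Minimal CI gate: build the package, then lint it with namcap."""
  import subprocess
  import sys
  from pathlib import Path

  def main() -> int:
      # Build from the PKGBUILD in the current directory.
      build = subprocess.run(["makepkg", "--syncdeps", "--noconfirm"])
      if build.returncode != 0:
          print("build failed, refusing to mirror to AUR", file=sys.stderr)
          return 1

      # Lint the PKGBUILD itself and every package file it produced.
      targets = ["PKGBUILD"] + [str(p) for p in Path(".").glob("*.pkg.tar*")]
      lint = subprocess.run(["namcap", *targets], capture_output=True, text=True)
      if lint.stdout.strip():
          # Treat any namcap output (warnings or errors) as a failure.
          print(lint.stdout, file=sys.stderr)
          return 1
      return 0

  if __name__ == "__main__":
      sys.exit(main())

Anything namcap complains about fails the job, so nothing gets pushed without a human looking at the report first.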
> I think people are forgetting that as a maintainer you are meant to look into each update, read the change logs, check if there are any incompatibilities, patch out anything which is non-free (if possible) or patch any issues with the source which might not compile on Arch Linux for whatever reason, check new dependencies, check if dependencies have been removed, check if the compile procedure has changed and thus the PKGBUILD will need rewriting, and the list goes on.
At least in the case of ROS, packages provide extensive metadata via rosdep for their own build system. We parsed that metadata, and things such as the license or dependencies were autogenerated. If there are any incompatibilities with Arch, well, then the build will fail in the CI and no mirroring to the AUR will happen. Why do you even bring this up? It's just wrong.
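For illustration, a rough sketch of what that autogeneration amounts to, assuming the catkin-style package.xml manifest; the rosdep-key-to-Arch-package mapping here is a hypothetical stand-in for a real lookup table:

  #!/usr/bin/env python3
  """Sketch: pull license and dependency fields out of a ROS package.xml."""
  import xml.etree.ElementTree as ET

  # Hypothetical mapping from rosdep keys to Arch package names.
  ROSDEP_TO_ARCH = {"boost": "boost", "catkin": "ros-melodic-catkin"}

  def parse_manifest(path: str) -> dict:
      root = ET.parse(path).getroot()
      licenses = [el.text for el in root.findall("license")]
      # Cover both format-1 and format-2 dependency tags.
      tags = ("depend", "build_depend", "buildtool_depend",
              "run_depend", "exec_depend")
      deps = {el.text for tag in tags
              for el in root.findall(tag) if el.text}
      return {
          "license": licenses,
          "depends": sorted(ROSDEP_TO_ARCH.get(d, d) for d in deps),
      }

  if __name__ == "__main__":
      import json, sys
      print(json.dumps(parse_manifest(sys.argv[1]), indent=2))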
> Sure, a PR to bump the release is always nice, but it should be checked against all the things I have named above, and then merged; then you push it manually.
> If you don't have the time for the above, try maintaining fewer packages, or try finding more time; there is no cheat code to maintaining packages.
This ignores all the cases you personally haven't run into. I didn't maintain ~400 packages to flex, feel good or something similar; I got them under one umbrella again because, at the time, multiple people each only cared about a few parts of the framework, and it was all in the "worked for me this one day and I haven't updated since then" state. Maintainers updated wildly whenever they had time, breaking packages that depended upon the old version. Once I took over the majority of the ROS melodic packages on the AUR, I mirrored them to GitHub to allow people to propose contributions without having direct access.

I couldn't spend 20 hours in a week in which a new revision of the core parts was released. In the beginning I tested every contribution on my machine and, surprise, this led to issues because I didn't have the 200 dependencies built in clean environments lying around somewhere. We then developed a CI/CD system that automatically rebuilt the whole dependency chains of packages, testing everything against freshly built packages and not against packages that were still ABI-compatible out of sheer luck.

Later on, we started using a Python script that parses the rospkg metadata and updates the PKGBUILDs with find+replace, so that patches we had applied to some of the packages weren't deleted. It's a command-line tool and you still had to confirm the diff, but that's it.

With those three things, GitHub + CI + update-helper script, we finally got the ROS melodic packages into a state where ROS melodic worked permanently, and not only for the two weeks after some maintainers had spent a lot of manual time. We were never able to test the whole framework manually, as no one would ever use all the non-core parts or have a >10,000-line codebase to exercise all the packages after installation, but with GitHub we were able to fix issues of users of some obscure package quickly, even if they hadn't figured out the solution and submitted a PR themselves.
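The core idea of the dependency-chain rebuild can be sketched like this; the graph data and package names below are toy stand-ins, while the real CI extracted the graph from the PKGBUILDs:

  #!/usr/bin/env python3
  """Sketch: compute a rebuild order for everything downstream of a
  changed package, so dependents are rebuilt against the fresh build."""
  from graphlib import TopologicalSorter

  # package -> packages it depends on (toy data).
  DEPENDS = {
      "ros-melodic-catkin": set(),
      "ros-melodic-roscpp": {"ros-melodic-catkin"},
      "ros-melodic-tf": {"ros-melodic-roscpp"},
  }

  def reverse_closure(changed: str) -> set[str]:
      # Collect the changed package plus all transitive reverse deps.
      affected, frontier = {changed}, {changed}
      while frontier:
          frontier = {p for p, deps in DEPENDS.items()
                      if deps & frontier and p not in affected}
          affected |= frontier
      return affected

  def rebuild_order(changed: str) -> list[str]:
      affected = reverse_closure(changed)
      # Topologically sort the affected subgraph: dependencies first.
      sub = {p: DEPENDS[p] & affected for p in affected}
      return list(TopologicalSorter(sub).static_order())

  if __name__ == "__main__":
      # catkin first, then roscpp, then tf.
      print(rebuild_order("ros-melodic-catkin"))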
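And the update helper's confirm-a-diff flow was conceptually no more than this simplified sketch; the real script also refreshed checksums and handled more fields:

  #!/usr/bin/env python3
  """Sketch: bump pkgver in place so hand-applied patches in the
  PKGBUILD survive, show the diff, and only write after confirmation."""
  import difflib
  import re
  import sys

  def bump_pkgver(pkgbuild: str, new_ver: str) -> str:
      # Rewrite only the pkgver/pkgrel lines; everything else (patches,
      # prepare() hooks, extra depends) is left untouched.
      out = re.sub(r"^pkgver=.*$", f"pkgver={new_ver}", pkgbuild, flags=re.M)
      return re.sub(r"^pkgrel=.*$", "pkgrel=1", out, flags=re.M)

  def main(path: str, new_ver: str) -> int:
      old = open(path).read()
      new = bump_pkgver(old, new_ver)
      sys.stdout.writelines(difflib.unified_diff(
          old.splitlines(True), new.splitlines(True),
          "PKGBUILD (old)", "PKGBUILD (new)"))
      if input("apply? [y/N] ").lower() != "y":
          return 1
      open(path, "w").write(new)
      return 0

  if __name__ == "__main__":
      sys.exit(main(sys.argv[1], sys.argv[2]))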
> In simple terms, automation is good, using it carelessly is bad. Well, even carelessly it isn't bad per se: someone should always check whatever CI/CD task is executed. I feel developers rely too much on CI/CD these days and want to write a script to do everything, without realising that their intervention is still needed.
Can you elaborate on this with an example? The whole statement is so general that I don't know what you want to imply with it. Doesn't it even contradict itself at the beginning? CI/CD was invented because pushing from dev machines into prod had issues. AutoUpdateBot may be harmful because there is no quick manual look-over and it ignores everything except updating the version, pkgrel and checksums. Heck, it doesn't even try to build the package before pushing. But all those problems are problems with AutoUpdateBot, not with (semi-)autoupdaters, and definitely not with "CI before actual push" systems. So, to the people with an extreme position on this: please adopt a more nuanced view of the general topic.

I stopped doing the ROS melodic packages because the software got older and a newer ROS release had been adopted by the majority, while I had to backport more and more libraries to keep it running.

Regards,
Oskar