On 12/04/2023 23:43, j.r wrote:
First of all: thanks for all those responses; it looks like this is quite a controversial topic.
Automation is dumb; you can't expect it to test every single possibility. It does not have a brain and it can't use common sense.
If a package installs but, for example, you forgot to copy the data over into the package, sure, it will build, CI/CD will pass and it will be pushed to the remote, but it's still a dysfunctional package.
That's exactly what I wanted to say initially. The specific case I referred to was just an example of what could go wrong, and that particular case could actually be spotted by CI, because makepkg itself fails with an error.
But if, for example, additional build steps become necessary between versions while the existing steps still run without failing, CI can't spot it. The package will look good, but functionally it might be completely broken.
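To make that concrete, something like the following could run as an extra step after makepkg in CI. It is only a rough sketch: the package name "foo" and the expected paths are made up for illustration and would have to match the real package.

import glob
import subprocess
import sys

# Hypothetical payload the package is supposed to ship; adjust per package.
EXPECTED = ["usr/bin/foo", "usr/share/foo/"]

def main() -> None:
    # makepkg drops the finished archive into the build directory
    pkgs = glob.glob("*.pkg.tar.zst")
    if not pkgs:
        sys.exit("no built package found - run makepkg first")
    pkg = pkgs[0]
    # pacman -Qlp lists the files contained in a package *file*
    listing = subprocess.run(["pacman", "-Qlp", pkg],
                             capture_output=True, text=True, check=True).stdout
    missing = [path for path in EXPECTED if path not in listing]
    if missing:
        sys.exit(f"{pkg} builds, but is missing: {', '.join(missing)}")
    print(f"{pkg} contains all expected files")

if __name__ == "__main__":
    main()

That would catch the "forgot to copy the data over" case from above, but it still can't tell you whether the program actually works.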
So I personally agree with Polarian that it's the job of a good packager to do at least some basic functionality testing (e.g. does the software still start?) before pushing it to the AUR. What I take from the discussion is that there is currently no clear policy on this topic, but maybe one should be established? Or at least the wiki could recommend it as a best practice somewhere? I'm not that experienced with the correct way of taking this topic further, so help is appreciated.
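By "basic functionality testing" I don't mean anything fancier than a check like this, run in a throwaway VM or chroot after installing the freshly built package. Again just a sketch: "foo" and "--version" stand in for whatever the real software provides.

import subprocess
import sys

def main() -> None:
    # Does the software still start at all? Placeholder binary and flag.
    try:
        result = subprocess.run(["foo", "--version"],
                                capture_output=True, text=True,
                                timeout=30, check=True)
    except (OSError, subprocess.CalledProcessError,
            subprocess.TimeoutExpired) as exc:
        sys.exit(f"smoke test failed: {exc}")
    print(f"foo starts fine: {result.stdout.strip() or result.stderr.strip()}")

if __name__ == "__main__":
    main()

Even something that small goes beyond what a plain "does it build" check verifies.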
Package updates always require some sort of manual review; you can't blindly bump a package to the next version, as you might encounter:

* The license changed
* Dependencies changed or there are new dependencies
* Etc.

I'd say automation is fine if it opened a pull request which can be reviewed and tested. But blindly pushing new versions to the AUR is a no-go for me.
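Something along these lines would be enough to get the benefit of automation without blind pushes. Purely a sketch, none of this is an existing tool: it bumps pkgver, refreshes the checksums and .SRCINFO, test-builds, and commits to a local branch, so pushing to the AUR stays a manual decision after review.

import re
import subprocess
import sys
from pathlib import Path

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def bump_pkgver(new_version: str) -> None:
    # Rewrite pkgver and reset pkgrel in the PKGBUILD
    text = Path("PKGBUILD").read_text()
    text = re.sub(r"^pkgver=.*$", f"pkgver={new_version}", text, count=1, flags=re.M)
    text = re.sub(r"^pkgrel=.*$", "pkgrel=1", text, count=1, flags=re.M)
    Path("PKGBUILD").write_text(text)

def main() -> None:
    new_version = sys.argv[1]          # e.g. 1.2.3, supplied by the update checker
    bump_pkgver(new_version)
    run("updpkgsums")                  # refresh source checksums (pacman-contrib)
    srcinfo = subprocess.run(["makepkg", "--printsrcinfo"],
                             capture_output=True, text=True, check=True).stdout
    Path(".SRCINFO").write_text(srcinfo)
    run("makepkg", "--syncdeps", "--cleanbuild")   # prove it still builds
    run("git", "checkout", "-b", f"update-{new_version}")
    run("git", "add", "PKGBUILD", ".SRCINFO")
    run("git", "commit", "-m", f"upstream release {new_version} (needs review)")
    print("update staged locally - review the diff and test before pushing")

if __name__ == "__main__":
    main()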