On 4/12/23 20:55, Polarian wrote:
> You don't test code by compiling it and hoping it works if it
> compiles successfully.

True. Nobody claimed this to be what "testing" means, I think?

> Like I said, although it has a rich collection, a package can be
> built, just like code, but still contain logical issues within the
> package, such as you forgetting to move a binary into the package
> during the build, and thus the package being defective.
>
> You can't test for this within your CI/CD.

This is patently false. If you can inspect it manually, you can and
should automate it. The example of a "forgotten binary" is
hilariously trivial to test for - how is this even an argument?

A totally naive test: `[[ -x pkg/usr/bin/foo ]]`

Feel free to pimp that with `file` or some binutils magic to assert
the correct architecture and binfmt, but it is a perfectly fine test
to start with.

A conservative professional would create a tripwire-esque list of
expected essential file locations and attributes, assert those en
bloc, and fail if *anything* deviates from that expectation,
preventing the automated deployment.
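To make that concrete, here is a minimal sketch of such a check. The
`pkg` tree and `usr/bin/foo` are the example from above; the manifest
format is made up for illustration:

    #!/usr/bin/env bash
    # Sketch: en-bloc manifest check against the built package tree.
    # manifest.txt is assumed to hold lines of: <path> <octal mode> <owner>
    set -euo pipefail

    pkgdir=pkg

    # The naive test, pimped with file(1) to assert arch and binfmt:
    [[ -x "$pkgdir/usr/bin/foo" ]]
    file "$pkgdir/usr/bin/foo" | grep -q 'ELF 64-bit.*x86-64'

    # Tripwire-esque: fail if *anything* deviates from the expectation.
    while read -r path mode owner; do
        [[ -e "$pkgdir/$path" ]] || { echo "missing: $path" >&2; exit 1; }
        actual=$(stat -c '%a %U' "$pkgdir/$path")   # GNU stat: mode + owner
        [[ "$actual" == "$mode $owner" ]] || {
            echo "changed: $path ($actual, expected $mode $owner)" >&2
            exit 1
        }
    done < manifest.txt

Wire that into the test stage of the pipeline and no forgotten binary
ever reaches a user again.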
> Well maybe I care too much about the quality of my PKGBUILDs, but is
> that a bad thing?
>
> Every time I rush, or do not spend the time the package deserves, it
> is shipped partially or fully broken.
>
> Packaging takes time, you can't expect CI/CD to do it all for you.

Cooking with love always tastes better, too, right?

Sarcasm aside: how can you so plainly tell us that your process is
broken, yet so adamantly insist on continuing to do it that way? The
whole point of test automation is to have a reliable, reproducible,
and cheap process without random (human) errors. Solid system
integration tests are rigorously defined lists of steps to perform
and assertions to verify. Sounds a lot like something a computer
would be good at, right?

To prevent any misunderstanding: automated promotion of any artifact
dropping out of a CD build pipeline without a 100% green,
sufficiently comprehensive, automated test stage is a broken
abomination. Period.

Using namcap and adding package-specific smoke tests - like checking
the exit code of running `foobin --version` in a clean Arch chroot -
plus some filesystem plausibility checks beyond the generic namcap
rules, plus regression tests to make sure a previously found issue
with the package doesn't resurface: that's a good baseline for a
packager.
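A sketch of that baseline, assuming a clean chroot was prepared
beforehand with devtools (e.g. `mkarchroot "$CHROOT/root" base-devel`);
`foobin` and the package file name are the hypothetical example from
above:

    #!/usr/bin/env bash
    # Sketch: namcap lint plus a smoke test in a clean Arch chroot.
    set -euo pipefail
    CHROOT=/var/lib/aurtest/chroot            # assumed location
    pkg=foobin-1.0.0-1-x86_64.pkg.tar.zst     # hypothetical artifact

    # namcap exits 0 even with findings, so grep its output for errors.
    if namcap "$pkg" | grep ' E: '; then
        echo "namcap reported errors" >&2
        exit 1
    fi

    # Install into the chroot and check the exit code of a trivial
    # invocation. (A real pipeline would use a disposable copy of the
    # chroot instead of touching the pristine one.)
    sudo systemd-nspawn -q -D "$CHROOT/root" --bind="$PWD:/work" \
        bash -c "pacman -U --noconfirm /work/$pkg && foobin --version"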
*Functional* testing of the software is upstream's business. A
packager is first and foremost a system integrator, making sure the
software fits snugly into the distribution-specific nest - f. ex. by
adding systemd units that upstream doesn't supply for every
distribution under the sun, and by _at least_ testing whether a
current snapshot of Arch is suitable to execute the binaries.

Fun fact: nobody can tell whether your well-tested AUR package even
builds a day later, when Arch replaces glibc with µclibc, punts
systemd entirely, and switches the binfmt from ELF to COFF. Yeah, not
a reasonable scenario, but you don't seem to be aware how little
stability Arch actually offers, and how much of it stems solely from
a lot of people sticking to many, many conventions and knowingly
accepting that things may break™.

Funnily enough, I could easily run automated tests against a daily
refreshed chroot, with nightly builds, to get an early warning of
such issues. Not going to happen without automation.

Well, I deleted three pages' worth of rant. :)

Please do not spout automation FUD any more, it's painful. Please get
over the Dunning-Kruger peak first.

I'm out, touching some grass. Peace.
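P.S.: In case "not going to happen without automation" sounds
abstract - a nightly job along these lines would do. Paths and the
test script name are made up; the tools are from devtools:

    #!/usr/bin/env bash
    # Sketch: throw away yesterday's chroot, build against a fresh
    # snapshot of Arch, re-run the package tests. Trigger via cron or
    # a systemd timer.
    set -euo pipefail
    CHROOT=/var/lib/aurtest/chroot
    rm -rf "$CHROOT" && mkdir -p "$CHROOT"
    mkarchroot "$CHROOT/root" base-devel   # fresh, fully current Arch
    makechrootpkg -c -r "$CHROOT"          # clean build of the PKGBUILD in $PWD
    ./run-package-tests.sh                 # the checks sketched above (hypothetical)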