Hello,
Well what specifically are you planning to package? I don't think you've mentioned that yet. Abstract discussions lead to too many unspoken assumptions.
I was planning to package esc (Go) because it is a makedepends of another Go package I adopted, but the Makefile for that package is broken and upstream seems abandoned, so I won't bother packaging esc; it is also an old tool which is no longer used. I sent the initial email before I found a workaround for the package, which is just calling go build directly.
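For context, the workaround is nothing clever: the PKGBUILD just skips the broken Makefile and calls the Go toolchain itself. Something along these lines (a rough sketch following the usual Arch Go packaging guidelines; the names and paths are placeholders, not the actual PKGBUILD):

    build() {
        cd "$pkgname-$pkgver"
        # flags recommended by the Arch Go package guidelines (PIE, reproducible paths)
        export GOFLAGS="-buildmode=pie -trimpath -mod=readonly -modcacherw"
        # bypass the broken Makefile and invoke the compiler directly
        go build -o "$pkgname" .
    }

    package() {
        cd "$pkgname-$pkgver"
        install -Dm755 "$pkgname" "$pkgdir/usr/bin/$pkgname"
    }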
Seems like a lot of devs are anti-distro, but I personally would not like bespoke and untested package managers vomiting all over my filesystem and leaving untrackable files.
Well, that still happens with some packaged applications, especially non-free ones; they will vomit all over your home directory. I also believe devs are anti-distro because cross-distro packaging has taken off so much. AppImages are used a lot in the AUR now, and I know a few people who don't use pacman for anything more than installing flatpak, and then install all their software through Flatpak. There are even distributions coming out now which rely solely on cross-distro packaging; only the base system is packaged. Ubuntu has been moving towards snaps and flatpaks taking over the distro for years now.

It has its benefits: flatpaks let the developers themselves package the software, and they know that no matter what distro it is, the package should work, as it bundles the exact versions of the dependencies, which also avoids dependency version conflicts when you rely on older dependencies. Personally I am for distro packaging, and I am also for installing software into the filesystem. I dislike the concept of huge isolated packages, and the idea that I have to use multiple tools to update my system, when with distro packaging it is guaranteed to install correctly and a simple pacman -Syu updates the entire system.

Another thing which is becoming more and more common is the absence of build information. A few authors I have spoken to forbid you from building the source code, with rules such as: "This source code is provided for transparency, contributions are not welcome and you are not permitted to compile from source, please use the binary in releases". I have had conversations with these devs, and the reasons they give are the following:

- Dislike of the idea that anyone can use their code. This is hilarious, because they advertise themselves as open source but are against the idea of it being open; in other words they are more source-available than open source, not that anyone sees the difference anymore. People keep telling me Minecraft is open source because it is written in Java, meaning they can read the source code, which is wrong.

- Dislike of distributions. Yes, this one is a big issue. A few devs now hate it when I come along and find some build issue because their software doesn't build in a reproducible way, and they simply say that they do not want the package distributed by anyone other than them, because they want full control. They use proprietary distribution servers; I can't remember the URL, but it is basically a repository service you pay for, I believe $10/month, and it will spit out a PPA for Ubuntu and a few other distro-specific packages. Why be open source if you are going to fight contributions? Other devs simply state that they want their software to be managed by their built-in tools, and a lot of these are unfortunately proprietary. I am seeing more and more software use proprietary "autoupdaters", which the privacy-conscious community considers a breach of privacy: you never want an application calling home without your permission, let alone allowing it to install whatever it likes on your system.

- Time for the worst one: copy-and-paste install scripts. These are still quite popular but seem to be dying out due to the introduction of new cross-distro tools. Remember those times when you copy and paste curl "https://some.url/some_script.sh" | /bin/bash? Does anyone actively check these scripts for malicious intent?
So many sysadmins just trust the author and will run these scripts as root, as they vomit all over your filesystem and install the application in the least conventional way possible, because the developer has obviously never heard of the Linux filesystem hierarchy and where different files are meant to be stored. These devs will not support your idea of distro packaging and will simply ask you to use their copy-and-paste script, and I wouldn't be surprised if it grabs your system information and makes an HTTP POST request to some random URL while it's doing its thing. It's becoming a right pain in the arse now, because you go to install a package from the AUR and find out that it is literally just dumping the contents of upstream into /opt, because upstream doesn't support packaging.

The Arch Linux philosophy is to keep it simple, but packaging is no longer simple. Of course, there is still some C/C++ software which is really nice: it dynamically links against the current libraries provided by the distro, it can be built with a Makefile or CMake with ease, and it spits out a binary which can easily be installed into /usr/bin. The compiler doesn't call home for updates, the compiler doesn't try to automatically update itself, the compiler doesn't do anything you wouldn't expect; it just does what you tell it to do, not what the dev tells it to do on your behalf.

I really wish software devs would get their act together and start actively supporting the people trying to package their software, but unfortunately I feel that within 5 years Arch Linux will cease to care about distro packaging and will aim towards flatpaks, just like most other distros are currently doing. Sorry for the rant; it's just getting on my nerves how much additional effort has to go into every single package because of the developer fighting every step of the way.
Arch users have the choice to use either (with consequences either way).
Using pacman is the only officially supported method of installing software, which is made clear on the ArchWiki; I remember editing a page to include it.
static PIE exists.
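For example, with a reasonably recent GCC and glibc you can build a position-independent, fully static binary (a minimal sketch, assuming a trivial hello.c):

    $ gcc -O2 -fPIE -static-pie hello.c -o hello
    $ file hello   # should report a statically linked PIE; exact wording varies by file version

So you keep ASLR without needing any shared libraries at runtime.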
I forgot about static PIE; I guess the benefits of dynamic linking are decreasing.
True but you'd be surprised how often that doesn't happen.
Well, most of the time it is because software doesn't update its damn dependencies, so a lot of distros have to package 10 different versions of the same library. At least Arch has the right idea of "if it doesn't compile with the latest compiler and libraries, then it's not packageable"; hopefully that is the spur which forces devs to actually update their dependencies, which is very important for preventing security vulnerabilities.
Bit of a myth: you're not using the entirety of everything you import, and linkers should properly handle removing unused symbols. Actually, static linking might end up smaller this way (compared to installing the entire library as a separate package).
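For instance, with GCC/binutils you can put each function in its own section and let the linker throw away anything unreferenced (just a sketch of the usual flags, nothing project-specific):

    $ gcc -O2 -ffunction-sections -fdata-sections -c main.c biglib.c
    $ gcc -Wl,--gc-sections main.o biglib.o -o app
    # add -Wl,--print-gc-sections to see which sections the linker discarded

And when linking against a .a archive, only the object files that are actually referenced get pulled in anyway.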
It depends: the binaries themselves will always be bigger, but if you include the libraries installed as dependencies, then yeah, I assume it's totally reasonable for the overall size to be bigger. But imagine if everything statically linked against glibc; that would be a lot of duplication of symbols.
Linkers have improved a lot since the last time static linking was the industry trend.
Because developers and users like a simple executable: one file they can download, instead of tons of shared objects and having to ensure they are all in the loader's path, etc. I guess there are arguments for and against; static is very useful for cross-distro distribution, as everything is bundled, while dynamic is very good for distro packaging, because you link against the packaged libraries.

Sorry for the long rant of an email,

--
Polarian
GPG signature: 0770E5312238C760
Website: https://polarian.dev
JID/XMPP: polarian@polarian.dev