On Sat, Mar 25, 2017 at 09:19:43 +0100, Ralf Mardorf wrote:
On Sat, 25 Mar 2017 06:47:07 +0000, Xyne wrote:
A bash script should depend only on bash.
Hi Xyne,
Wouldn't it be better for it to depend on coreutils? Or do you assume a bash script only uses bash's internal commands and won't use external commands such as, e.g., basename?
In such a case, yes, the application should state that it depends on coreutils. I don't see an issue, though, other than "I don't want to type that".
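A minimal sketch of the distinction (the path used is arbitrary, chosen for illustration): basename(1) is a coreutils binary, while the equivalent parameter expansion is handled by bash itself, so only the first form actually pulls in coreutils.

```shell
#!/bin/bash
# Illustrative only: the path below is arbitrary sample data.
path=/usr/share/doc/pacman

# External command: basename(1) comes from coreutils.
basename "$path"

# Pure-bash alternative: parameter expansion, no external dependency.
echo "${path##*/}"
```

Both lines print "pacman"; a script written entirely in the second style really does depend only on bash.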
It is still up to the user to decide which packages to install even if base is recommended. If you don't use nano or lvm, there is no need to install those packages, for example.
A user is always allowed to customize the install. On my install even some hard dependencies are missing, but on another software level, not on the base level. The Arch community needs to share some base system. If we didn't, we wouldn't need Arch-related mailing lists. There must be something in common to call Arch "Arch".
I don't think that the Arch community or the OS itself is solely defined by an arbitrarily selected set of packages that are expected to be installed. After all, the systems still share the package manager, the origin of the packages (created and provided by the Arch devs), and the general idea of KISS (e.g. upstream as unpatched as possible, no partial-upgrade support, ...).

I think some people are mixing up these two things:

1. Packages that are expected to be there to guarantee a minimally working system in most situations.
2. Packages that are expected to be there to provide a minimally comfortable working environment to the user.

I could live with group 1 being mandatory (I wouldn't be happy, but I could accept it). But not group 2, because at that point things get a little more subjective. Let me quote the Arch Wiki article on Arch Linux:
VERSATILITY:
Arch Linux is a general-purpose distribution. Upon installation, only a command-line environment is provided: rather than tearing out unneeded and unwanted packages, the user is offered the ability to build a custom system by choosing among thousands of high-quality packages provided in the official repositories for the x86-64 architecture.
I'm sorry, but I currently find myself "tearing out unneeded and unwanted packages". And in a sort of killing spree, I find myself removing packages from both groups 1 and 2 as defined above. I end up with something like my personal little VPS, for instance:

* I won't plug in any USB devices or PCMCIA cards (usbutils, pcmciautils)
* I use systemd-networkd for my static network configuration (netctl, dhcpcd)
* I don't use LVM or block-level encryption (lvm2, cryptsetup)
* Being on a VPS, I don't care about RAID (mdadm)
* I don't intend to interact with any file systems other than ext4 (jfsutils, reiserfsprogs, xfsprogs)
* I don't use nano (nano)
* I use zsh as my interactive shell and generally write my shell scripts in POSIX sh, for which I have dash installed (and sh linked to it), so bash is installed --asdeps.
* I don't use any of the tools in inetutils, so it's installed --asdeps.
* I still keep the "normal" Linux kernel around "just in case", but really I am booting the LTS kernel.

Now, is that no longer Arch Linux? I would say it still is. But with the current policy, it appears that it is not. Not because I'm running unsupported software, but because I just got rid of a few things that I don't need. The same goes for my laptop setup (although with a smaller set of missing packages there). Or my little test container, where I don't need the kernel, for instance.

I do not oppose the idea of having a group of packages that are *recommended* to be installed: people who don't quite know what they need can just install that group and assume that they will probably not run into too much trouble. But people shouldn't be *forced* to install those packages (or find themselves in the "not supported here!" zone).
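For what it's worth, the --asdeps and sh-to-dash setup above boils down to a session like the following sketch (commands shown for illustration; check against your own system before running anything):

```shell
# Mark bash and inetutils as installed "as a dependency", so pacman
# no longer treats them as explicitly requested. They would then be
# listed by `pacman -Qdtq` once nothing else depends on them.
pacman -D --asdeps bash inetutils

# Point /usr/bin/sh at dash for POSIX sh scripts; bash stays
# available for scripts with an explicit bash shebang.
ln -sf dash /usr/bin/sh
```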
The whole discussion is about whether or not the base group should be installed by default or at least assumed to be installed. If packages will not work because dependency resolution fails in the absence of unspecified base packages, then you are essentially forcing the user to install the full base group (or manually resolve deps after noticing that a package doesn't work).
Correct. What's wrong with this approach? You could safely remove nano, to free the immense amount of disc space it takes. Others, e.g. Eli, seemingly do this. It is unlikely that a package will explicitly have a hard dependency on nano.
[rocketmouse@archlinux ~]$ pactree -r nano
nano
I find a little irony in the fact that you use pactree/pacman to show that nothing depends on nano, when the issue here is exactly that pactree/pacman become unreliable for packages in base.
Just in case a package really should depend on nano, the user needs to reinstall it. I expect a user who decides to remove packages from "base" to understand what she's doing. There are valid reasons that base includes a few packages, such as nano, even if those are not strictly required or don't fit UNIX standards from the '70s.
Yes, those are valid assumptions for a "base install" as in "before the user starts configuring their system to their needs". And I'm happy that there is at least one text editor in that "base install". But once I'm done and have switched to vim or emacs or atom or whatever else I happen to use, I might feel differently about that.

(Just to clarify, I don't get all this hate on nano. It's a fine editor for people who just want to quickly edit something from a non-graphical environment and don't want to be bothered learning the quirks of vi or emacs.)

Of course, nano is tiny, and there is no harm in keeping it around. But together with all the other packages we don't need, it starts feeling like clutter. Especially if I have to mentally filter out those packages from the output of `checkupdates` before I decide whether I want to run that update, or leave it because it only affects packages I don't use.

Best,
Tinu
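Incidentally, that mental filtering could be scripted. A minimal sketch, assuming `checkupdates` prints one "name oldver -> newver" line per package; the ignore file and the sample version numbers are made up for the demonstration:

```shell
# Packages I don't care about, one name per line (hypothetical list).
printf 'nano\n' > /tmp/ignore.txt

# In real use the input would come from `checkupdates`; fixed sample
# lines are piped in here so the example is self-contained.
printf 'nano 7.2-1 -> 7.2-2\nvim 9.1-1 -> 9.1-2\n' \
    | grep -v -F -f /tmp/ignore.txt
# Only the vim line survives the filter.
```

In real use the first pipeline stage would simply be `checkupdates | grep -v -F -f ignore.txt`.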