[arch-general] Tobias Powalowski and his nonsensical maintenance decisions

Carsten Mattner carstenmattner at gmail.com
Fri Apr 28 16:29:53 UTC 2017


On Fri, Apr 28, 2017 at 12:40 PM, fnodeuser <subscription at binkmail.com> wrote:
> Tobias Powalowski,
>
> you continue to place pkgs in the testing repo that do not require
> any further testing.
>
> for what reasons, exactly, do the linux 4.10.13 and hwids 20170328
> pkgs need to be in testing for 4+ days?
>
> also, you did not replace git:// with git+https:// in the hwids
> PKGBUILD file. GitHub has HTTPS enabled for everything.
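
For what it's worth, that change is a one-line edit to the source
array of the PKGBUILD. A sketch, assuming hwids is pulled straight
from its GitHub repository (the exact source line, URL, and tag
fragment in the real PKGBUILD may differ):

  # fetch over HTTPS instead of the unauthenticated git protocol
  # (repository URL and tag fragment below are assumptions;
  #  check the actual hwids PKGBUILD)
  source=("git+https://github.com/vcrhonek/hwids.git#tag=$pkgver")
  # instead of:
  # source=("git://github.com/vcrhonek/hwids.git#tag=$pkgver")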

I have to get this off my chest, since the so-called stable and LTS
kernel branches have failed to deliver what their names promise.

fnodeuser, I understand why you'd want kernel stable and LTS updates
to be pushed to core quicker, but the reality is that the criteria for
what patches land in the stable and even LTS branches are lax. Like
Florian said, stuff breaks in stable and LTS kernel releases
regularly. I myself don't understand why it's called the stable or LTS
stable queue when it carries more than strictly critical fixes, but
I'm not the maintainer of those branches and may be missing the point.

<A small rant>

If you want a more stable kernel, you can choose an older LTS branch
like 4.1 or 3.16; those get fewer updates.
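
Arch itself only ships one such branch, via the linux-lts package;
anything older you'd have to build yourself. A minimal sketch for the
packaged one:

  # install the LTS kernel alongside the regular one
  pacman -S linux-lts
  # regenerate the boot entries so it shows up, e.g. with GRUB
  grub-mkconfig -o /boot/grub/grub.cfg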

I mean, it doesn't help that Greg KH always appends the note

  All users of the 4.10 kernel series must upgrade

even as stable kernel releases add regressions and random
refactorings, not just critical fixes. It doesn't make sense to me,
but the developers surely have their reasoning, and customers for whom
it evidently makes sense.

The constant churn of refactorings and whatnot makes it impossible for
all the hardware that, say, i915 supports to actually work reliably
across kernel releases. What used to work flawlessly in 4.1 can be
broken in 4.4 because the devs do not test with Intel GPUs older than
Gen7, for example, all the while claiming the hardware is supported in
the now refactored but practically untested code.

It's not surprising that places with many Linux workstations run
CentOS (Pixar), Scientific Linux (CERN), or the oldest supported
Ubuntu LTS (Google).

The main cause of the breakage is the Linux kernel's desire to be
monolithic and carry all drivers in-tree as much as possible for
easier refactoring, which makes sense for developers but pushes users
in need of stability to CentOS. The problem with a system like CentOS
is that you can hope a service pack release backports important new
features, but you cannot pick and choose. In an OS like FreeBSD or
Windows, or a microkernel-based system, it's much easier and more
common to have core pieces that change annually at most, plus a few
modules that link against a stable ABI and can ship fresh features.
Think nVidia's drivers on FreeBSD or Windows. Windows has hopped onto
the update-often-and-break-often train with version 10, but it still
has a stable driver ABI, like FreeBSD.

The reality is that hardware which worked in 4.10.3 can be broken in
4.10.8. I regularly look at the diffs of stable releases and fail to
understand the selection process.
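
If you want to check for yourself, the linux-stable tree makes the
comparison easy. A sketch (repository path current as of this
writing):

  # clone the stable tree from kernel.org
  git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  cd linux-stable
  # every commit between two point releases
  git log --oneline v4.10.3..v4.10.8
  # or just which files were touched, and how much
  git diff --stat v4.10.3 v4.10.8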

</A small rant>

