[arch-general] coping with damaging updates

Oon-Ee Ng ngoonee.talk at gmail.com
Thu Oct 27 23:34:50 EDT 2011


On Fri, Oct 28, 2011 at 11:02 AM, Mick <bareman at tpg.com.au> wrote:
> On Thu, 27 Oct 2011 12:30:50 -0500
> C Anthony Risinger <anthony at xtfx.me> wrote:
>
>>
>> the sources of error are likely not a thin smear of software that,
>> imo, makes the entire experience much better and enables fine-grained
>> access control ... i mean group perms are nice and all, but they are
>> incredibly coarse, and over the years have forced all sorts of
>> odd/convoluted application hierarchies + access patterns to cope.
>>
>> unfortunately, error sources are probably human, i.e. stuff isn't
>> being launched/run/used properly.
> It's not easy for the user to keep up with the stream of changes
> that pours in, until one day you update and a deprecated method is
> broken.
>
> A thought that just occurred to me: these kinds of updates could be
> segregated so that you can see them and research the procedures
> needed to implement them safely.
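
For what it's worth, pacman can already show pending updates before
anything is installed, so that research step is possible today. A
rough sketch (with the usual caveat that refreshing the sync databases
without also upgrading leaves them newer than the installed system):

    pacman -Sy          # refresh the sync databases
    pacman -Qu          # list pending upgrades without installing them
    pacman -Si linux    # inspect one package's details before deciding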

Am I the only one reading this thread who sees the inherent
contradiction of wanting complete control over your system, wanting
things not to break, yet wanting software to be (relatively
frequently) updated?

The devs run the software themselves; by the time it reaches
[testing] it has passed through their machines without (visibly)
breaking anything, and by the time it reaches [core]/[extra] it has
also passed through the machines of whoever runs [testing]. If a
system breaks because of an update from [core]/[extra], it's because
that system differs from all of those previous machines in its
hardware and/or software.
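
Where a given machine diverges is usually discoverable with stock
pacman queries; a quick sketch:

    pacman -Qm                    # foreign packages (e.g. AUR builds) no tester ran
    pacman -Qe                    # packages you explicitly chose to install
    find /etc -name '*.pacnew'    # shipped configs your local edits differ from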

To summarize: if your answer to 'what package broke it?' is simply
'I don't know, too many packages got updated at once', then there is
no realistic solution to the problem.
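
And that answer is rarely forced on you: pacman logs every transaction
to /var/log/pacman.log, so you can enumerate the suspects and bisect
by reinstalling old versions from the package cache. A rough sketch
(the cached filename below is a placeholder, not a real package):

    grep 'upgraded' /var/log/pacman.log | tail -n 30
    pacman -U /var/cache/pacman/pkg/somepkg-1.0-1-x86_64.pkg.tar.xz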

