[arch-general] debug package repositories again

Joe Julian me at joejulian.name
Thu Aug 13 18:09:46 UTC 2015


On 08/13/2015 10:58 AM, Leonid Isaev wrote:
> On Thu, Aug 13, 2015 at 10:09:00AM -0700, Joe Julian wrote:
>>>> Bandwidth is probably the main problem, although anyone who wants to debug
>>>> will probably be fine with that.
>>> I think you guys misunderstood me. The biggest problem IMHO with building debug
>>> versions locally is not compiling itself, but setting up the environment. So, I
>>> meant that packages come with debugging enabled (compiled with gcc -O0 -g and
>>> perhaps ./configure options). This way, there would not be many new packages.
>> No, the biggest problem with building debug versions locally is that it
>> takes hours of developer time multiplied by every bug found. Why are we
>> wasting such a limited resource when it's so easy not to? Life is finite.
> That's just pure theory. In practice, the compilation itself is a minor
> inconvenience, unless you are talking about Gnome/KDE. But debugging those is
> hopeless anyway :)
>
> As a developer, you'll spend most time understanding the changes (looking at
> the code), not compiling.

I'm not some newb; maybe google me before you make condescending statements.

There is a reason the other distros publish debug symbols: they're 
valuable, and there's no valid reason not to do the same.
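For reference, the way distros typically produce those debug packages is by splitting the DWARF data out of the optimized binary at package-build time. A minimal sketch of that split, assuming GNU binutils and a throwaway C source file (app.c here is just a placeholder):

```shell
# Build an optimized binary that still carries full debug info.
cat > app.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF
gcc -O2 -g -o app app.c

# Copy the .debug_* sections into a standalone symbol file,
# strip them from the binary users install, and record a
# debuglink so gdb can find the symbols again.
objcopy --only-keep-debug app app.debug
strip --strip-debug app
objcopy --add-gnu-debuglink=app.debug app
```

gdb picks up app.debug automatically when it sits next to the binary (or under /usr/lib/debug), which is roughly how the -dbg/-debuginfo packages on Debian and Fedora work.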

>
>>> Of course, this is not a good idea for things like FF/Gnome/KDE because of a
>>> slow-down, but a performance penalty for smaller programs like vim, links,
>>> XFCE4 etc. will not be noticeable (at least I don't see any for a self-compiled
>>> xfce4 desktop on a single-core Intel Atom based netbook).
>>>
>>> Cheers,
>> What is this slow-down you keep talking about? I'm not asking to do away
>> with optimization, just give us a way to get the debug symbols without
>> rebuilding. The debug symbols are located in totally different sections from
>> the code/data sections. You can check it with objdump:
> I was under the impression that with C, -On (n > 0) is not recommended with -g.
> Now, I don't know how much -O2 (for example) speeds things up compared to -O1
> etc, but probably not much on small applications.
>
> Granted, I think any compiler-level optimization is overrated, and I never
> really saw any measurable effect of it. But I use Fortran (not C) for all my
> projects.
>
> Cheers,

Yes, if you optimize, you will lose some variable data when it goes out 
of scope. If it's actually about debugging, inspecting variables, and 
stepping through code, that's something I would likely compile on my own 
with -O0. Most of the time, though, it's about seeing where the deadlock 
or segfault is and getting a meaningful stack trace.

I've seen quite a bit of a performance increase in some cases. GlusterFS 
was being built with -O0 back in the 2.0 days; building with -O2 produced 
about a 30% increase in throughput.
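As a rough illustration of that kind of gap (a hypothetical compute-bound loop, not the GlusterFS numbers themselves):

```shell
cat > bench.c <<'EOF'
#include <stdio.h>
int main(void) {
    double sum = 0.0;
    /* Compute-bound loop; -O2 typically tightens the generated code
       considerably compared to -O0's memory-bound output. */
    for (long i = 1; i <= 20000000; i++)
        sum += 1.0 / (double)i;
    printf("%.6f\n", sum);
    return 0;
}
EOF
gcc -O0 -o bench_O0 bench.c
gcc -O2 -o bench_O2 bench.c
time ./bench_O0
time ./bench_O2
```

Both binaries compute the identical result; only the wall-clock time differs, and by how much depends entirely on the workload, which is why blanket claims about optimization being overrated don't hold up.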