[arch-dev-public] [signoff] kernel26-2.6.24.3-6

Eric Belanger belanger at ASTRO.UMontreal.CA
Wed Mar 26 11:46:48 EDT 2008


On Mon, 24 Mar 2008, Tobias Powalowski wrote:

> Am Montag, 24. März 2008 schrieb Simo Leone:
>> On Mon, Mar 24, 2008 at 07:28:13PM +0100, Tobias Powalowski wrote:
>>> Users are happy with the new ISOs, just read the Forum thread about it.
>>
>> You know what. I don't care if the users are happy with the new ISOs.
>> That's right, I finally said it.
>> _I DON'T CARE_
>>
>> I don't care because _I_ am not happy with them. As someone who can see
>> that from a technological standpoint, it's a marvel that they even work,
>> that is, as a software developer, I'm ashamed to be associated with such
>> a shoddy product.
>>
>> I've offered alternatives, hell I've spent a lot of time offering
>> alternatives, built on more solid software engineering principles
>> than "Users are Happy", but no one around here, save Dan and Aaron, who
>> just happen to be code contributors, seems to give a damn. What's up
>> with that?
>
> You never started to create ISOs, nor did you want to create them.
> This topic is about the kernel26 signoff, and I would be fine if people
> stayed on topic.
> greetings
> tpowa
>
>

It works fine on i686 here, so signing off for i686.

For x86_64, it booted fine but I received these error messages in my 
terminal while building a package:

Message from syslogd at ovide at Tue Mar 25 23:06:08 2008 ...
ovide kernel: Oops: 0000 [1] PREEMPT SMP

Message from syslogd at ovide at Tue Mar 25 23:06:08 2008 ...
ovide kernel: CR2: 0000000000001128

Message from syslogd at ovide at Tue Mar 25 23:13:25 2008 ...
ovide kernel: Oops: 0000 [2] PREEMPT SMP

The dmesg trace mentioned unionfs. I don't have the trace anymore 
because the system froze (black screen, had to reset manually) when I 
tried to rebuild the same package, which I had previously interrupted 
because it seemed to have stalled.
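
That said, syslogd may have flushed the trace to disk before the 
freeze. A rough way to dig it out, assuming the default Arch syslog-ng 
setup sends kern.* to /var/log/kernel.log (adjust the path if your 
config differs):

# show context around the Oops lines in the persisted kernel log;
# /var/log/kernel.log is an assumption about the syslog-ng config
grep -B 5 -A 40 'Oops: 0000' /var/log/kernel.log | less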

On reboot, dmesg showed messages about unused inodes, probably from the 
journal, and since then I have been seeing constant 100+ MB disk IO and 
20% CPU use from the kernel raid-related process. Downgrading the 
kernel didn't fix that, so I guess my raid array has some problems 
(update: I checked my dmesg more carefully and it's definitely a raid 
issue). I don't know if the HD problem is the cause or the result of 
the kernel/system freeze.

OT: does anyone have experience with raid arrays? Let me know if you 
know how to fix this so I won't need to google.
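
In case it helps, these are the first checks I know of; a sketch only, 
with /dev/md2 taken from my dmesg below:

# overall state of all md arrays, including resync progress
cat /proc/mdstat

# per-array detail: state, failed/spare members, rebuild status
mdadm --detail /dev/md2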

Snippet from dmesg (it looks like the array is currently resyncing, so 
I think I just need to wait and it'll fix itself on its own. I 
definitely need to RTFM about raid ;) :

md: bind<sda1>
md: bind<sdb1>
md: bind<sdc1>
md: raid1 personality registered for level 1
raid1: raid set md0 active with 3 out of 3 mirrors
md: bind<sda2>
md: bind<sdb2>
md: bind<sdc2>
raid1: raid set md1 active with 3 out of 3 mirrors
md: bind<sda3>
md: bind<sdb3>
md: bind<sdc3>
md: md2: raid array is not clean -- starting background reconstruction
xor: automatically using best checksumming function: generic_sse
    generic_sse:  7942.800 MB/sec
xor: using function: generic_sse (7942.800 MB/sec)
async_tx: api initialized (async)
raid6: int64x1   2851 MB/s
raid6: int64x2   3500 MB/s
raid6: int64x4   3576 MB/s
raid6: int64x8   2800 MB/s
raid6: sse2x1    3919 MB/s
raid6: sse2x2    5225 MB/s
raid6: sse2x4    5383 MB/s
raid6: using algorithm sse2x4 (5383 MB/s)
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
raid5: device sdc3 operational as raid disk 2
raid5: device sdb3 operational as raid disk 1
raid5: device sda3 operational as raid disk 0
raid5: allocated 3226kB for md2
raid5: raid level 5 set md2 active with 3 out of 3 devices, algorithm 2
RAID5 conf printout:
  --- rd:3 wd:3
  disk 0, o:1, dev:sda3
  disk 1, o:1, dev:sdb3
  disk 2, o:1, dev:sdc3
md: resync of RAID array md2
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 128k window, over a total of 243015168 blocks.
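
If I read the md docs right, the two limits dmesg prints above are the 
dev.raid sysctls, and the resync can be watched (and sped up) from 
userspace; a sketch, run as root:

# watch resync progress; /proc/mdstat shows a progress bar and ETA
watch -n 5 cat /proc/mdstat

# the limits from the dmesg lines above are tunable
cat /proc/sys/dev/raid/speed_limit_min   # 1000 KB/sec/disk by default
cat /proc/sys/dev/raid/speed_limit_max   # 200000 KB/sec by default

# raising the minimum makes the resync finish sooner at the cost of
# foreground IO
echo 10000 > /proc/sys/dev/raid/speed_limit_min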


Thanks,
Eric


