[arch-general] mdadm: RAID-5 performance sucks
Hi, I've got four HDDs and combined them into a RAID-5 array, which basically works just fine. However, the performance sucks quite badly: I only get about 70 MB/s for reads and about 35 MB/s for writes. The hardware itself is quite decent: an Intel Xeon E3-1260L, 8 GB RAM and four Samsung HD204UIs. With the same setup and an Ubuntu live environment I get about 270 MB/s reading and about 130 MB/s writing. Therefore there must be a fundamental difference. I've compared various sysctl values between both environments and set them accordingly, but to no avail. Anyone else experiencing the same issues and/or can you point me in the right direction? At least I would like to saturate my Gigabit Ethernet connection. Best regards, Karol Babioch
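For reference, a minimal sketch of the kind of sysctl comparison described above (the output paths are just placeholders, and both dumps have to end up on the same machine before diffing):

    # in the Arch installation
    sysctl -a 2>/dev/null | sort > /tmp/sysctl-arch.txt
    # in the Ubuntu live environment
    sysctl -a 2>/dev/null | sort > /tmp/sysctl-ubuntu.txt
    # compare the two dumps (copy one across first, e.g. via scp or a USB stick)
    diff -u /tmp/sysctl-ubuntu.txt /tmp/sysctl-arch.txt | less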
On 10 April 2013 00:16, Karol Babioch <karol@babioch.de> wrote:
Hi,
I've got four HDDs and combined them into a RAID-5 array, which basically works just fine. However, the performance sucks quite badly: I only get about 70 MB/s for reads and about 35 MB/s for writes.
The hardware itself is quite decent: an Intel Xeon E3-1260L, 8 GB RAM and four Samsung HD204UIs. With the same setup and an Ubuntu live environment I get about 270 MB/s reading and about 130 MB/s writing.
Therefore there must be a fundamental difference. I've compared various sysctl values between both environments and set them accordingly, but to no avail.
Anyone else experiencing the same issues and/or can you point me in the right direction?
At least I would like to saturate my Gigabit Ethernet connection.
Best regards, Karol Babioch
I assume you're using Linux software RAID, although you don't mention it. I've experienced this when using RAID6 with a suboptimal stripe cache size. Try tinkering with /sys/block/mdX/md/stripe_cache_size. In my case, raising it to 8192 increased read and write speeds dramatically. Chris
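A minimal sketch of the adjustment Chris describes, assuming the array is /dev/md0 (substitute your own mdX). The value counts stripe-cache entries per member device, each a 4 KiB page, so 8192 on a four-disk array costs roughly 128 MiB of RAM:

    # check the current value (the kernel default is 256)
    cat /sys/block/md0/md/stripe_cache_size
    # raise it, as root; 8192 entries * 4 KiB * 4 disks ~= 128 MiB
    echo 8192 > /sys/block/md0/md/stripe_cache_size
    # the setting does not survive a reboot, so persist it via a udev rule
    # or a small systemd unit if it turns out to help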
Hi, I guess I was too vague about some of the details of the setup, so I will try to be more specific now. On 09.04.2013 at 18:20, Chris Down wrote:
I assume you're using Linux software RAID
Yes.
I've experienced this when using RAID6 and a suboptimal stripe cache size.
I have already tinkered with the stripe cache size, but I should probably approach this more systematically. I've found a "tuning" script at [1], but although it increased performance on Ubuntu a bit, it didn't have any impact on Arch :(. On 10.04.2013 at 02:35, Rafa Griman wrote:
What filesystem?
ext4.
What "benchmark" are you running?
Simple and plain "old" dd, just as described at [2] (a sketch of that kind of test follows after this message).
Stripe size?
In both cases the chunk size is 512K, which is quite large, but the array is primarily used for video files and works fine under Ubuntu (a chunk-alignment sketch also follows after this message).
Are the HDDs connected to a PCI(e|X) RAID controller or an onboard controller?
It's an onboard controller. To be more specific, this is what lspci says: 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
Are those 70 MB/s and 35 MB/s from a client node or local?
I've measured them in both cases (Arch & Ubuntu) locally, but from a client I get very similar results.
If they're from a client node, what are you using: FTP, SAMBA, NFS, ...?
I've only tried SMB so far, but given that the results are already this bad locally, it shouldn't matter here.
Config files?
Which ones? Pretty much everything was measured out of the box, partially even using the live environments, so I haven't configured much at all. As mentioned, I've played around with the sysctl settings, but haven't changed anything else that should matter here. Best regards, Karol Babioch [1]: http://ubuntuforums.org/showthread.php?t=1494846 [2]: https://wiki.archlinux.org/index.php/SSD_Benchmarking#Using_dd
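The dd test from [2] that Karol refers to looks roughly like this (mount point and file name are placeholders; conv=fdatasync forces the write test to flush to disk before dd reports a figure, and the caches are dropped so the read test actually hits the array):

    # write test: ~1 GiB, flushed to disk before the speed is reported
    dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=1024 conv=fdatasync
    # drop the page cache (as root) so the read test is not served from RAM
    sync && echo 3 > /proc/sys/vm/drop_caches
    # read test
    dd if=/mnt/raid/ddtest of=/dev/null bs=1M
    rm /mnt/raid/ddtest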
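And since the 512K chunk size came up: for a four-disk RAID-5 (three data disks per stripe), the matching ext4 alignment hints work out as sketched below. mkfs.ext4 normally detects these values automatically on top of md, and changing them means recreating the filesystem, so this is only a cross-check, not a recommendation:

    # stride       = chunk / block       = 512 KiB / 4 KiB = 128
    # stripe-width = stride * data disks = 128 * 3         = 384
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=384 /dev/md0   # destroys existing data!
    # the values an existing filesystem was created with:
    tune2fs -l /dev/md0 | grep -i raid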
Hi :) On Wed, Apr 10, 2013 at 2:18 PM, Karol Babioch <karol@babioch.de> wrote:
Hi,
I guess I was too vague about some of the details of the setup, so I will try to be more specific now.
On 09.04.2013 at 18:20, Chris Down wrote:
I assume you're using Linux software RAID
Yes.
I've experienced this when using RAID6 and a suboptimal stripe cache size.
I have already tinkered with the stripe cache size, but I should probably approach this more systematically. I've found a "tuning" script at [1], but although it increased performance on Ubuntu a bit, it didn't have any impact on Arch :(.
So maybe it could be an Arch issue. What about kernel versions? Are they the same in the Ubuntu and Arch environments you're running? Have you monitored CPU usage while you run dd? I find saidar (from libstatgrab) handy for getting a glimpse of what's happening locally. sar can also be useful, ... there's a whole bunch of tools out there, it depends on your preferences ;)
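A sketch of that kind of monitoring, run in a second terminal while the dd test is going (sar and iostat come from the sysstat package, saidar from libstatgrab; sda-sdd and md0 are assumed device names):

    # CPU utilisation, one sample per second (Ctrl-C to stop)
    sar -u 1
    # extended per-device statistics for the member disks and the array
    iostat -x sda sdb sdc sdd md0 1
    # or the curses overview
    saidar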
On 10.04.2013 at 02:35, Rafa Griman wrote:
What filesystem?
ext4.
What "benchmark" are you running?
Simple and plain "old" dd, just as described at [2].
Stripe size?
In both cases the chunk size is 512K, which is quite large, but the array is primarily used for video files and works fine under Ubuntu.
Are the HDDs connected to a PCI(e|X) RAID controller or an onboard controller?
It's an onboard controller. To be more specific, this is what lspci says:
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
Are those 70 MB/s and 35 MB/s from a client node or local?
I've measured them in both cases (Arch & Ubuntu) locally, but from a client I get very similar results.
If they're from a client node, what are you using: FTP, SAMBA, NFS, ...?
I've only tried SMB so far, but given that the results are already this bad locally, it shouldn't matter here.
Config files?
Which ones? Pretty much everything was measured out of the box, partially even using the live environments, so I haven't configured much at all. As mentioned, I've played around with the sysctl settings, but haven't changed anything else that should matter here.
I was asking about the config files in case you were using Samba, but I see that's not the case ;) Rafa
Hi :) On Tue, Apr 9, 2013 at 6:16 PM, Karol Babioch <karol@babioch.de> wrote:
Hi,
I've got four HDDs and combined them into a RAID-5 array, which basically works just fine. However, the performance sucks quite badly: I only get about 70 MB/s for reads and about 35 MB/s for writes.
The hardware itself is quite decent: an Intel Xeon E3-1260L, 8 GB RAM and four Samsung HD204UIs. With the same setup and an Ubuntu live environment I get about 270 MB/s reading and about 130 MB/s writing.
Therefore there must be a fundamental difference. I've compared various sysctl values between both environments and set them accordingly, but to no avail.
Anyone else experiencing the same issues and/or can you point me in the right direction?
At least I would like to saturate my Gigabit Ethernet connection.
What filesystem? What "benchmark" are you running? Stripe size? Are the HDDs connected to a PCI(e|X) RAID controller or an onboard controller? Are those 70 MB/s and 35 MB/s from a client node or local? If they're from a client node, what are you using: FTP, SAMBA, NFS, ...? Config files? Rafa
participants (3):
- Chris Down
- Karol Babioch
- Rafa Griman