[arch-general] mdadm: RAID-5 performance sucks

Rafa Griman rafagriman at gmail.com
Tue Apr 9 20:35:00 EDT 2013


Hi :)

On Tue, Apr 9, 2013 at 6:16 PM, Karol Babioch <karol at babioch.de> wrote:
> Hi,
>
> I've got four HDDs and added them to a RAID-5 array, which basically
> works just fine. However, the performance sucks quite hard: I get only
> about 70 MB/s reading and about 35 MB/s writing.
>
> The hardware itself is quite decent: Intel Xeon E31260L, 8 GB RAM and
> four Samsung HD204UI's. With the same setup and an Ubuntu live
> environment I get about 270 MB/s reading and 130 MB/s writing.
>
> Therefore there must be a fundamental difference. I've compared various
> sysctl values between the two environments and set them accordingly,
> but to no avail.
>
> Anyone else experiencing the same issues and/or can you point me in the
> right direction?
>
> At least I would like to saturate my Gigabit Ethernet connection.
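For comparison, sequential throughput like the numbers quoted above is often measured with a plain dd run. A minimal sketch (the mount point /mnt/raid and the test file name are assumptions; adjust to your setup, and note that dropping caches requires root):

```shell
# Sequential write: 1 GiB of zeros, flushed to disk before dd reports a rate
dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=1024 conv=fdatasync

# Drop the page cache so the read actually hits the disks (root only)
sync; echo 3 > /proc/sys/vm/drop_caches

# Sequential read of the same file
dd if=/mnt/raid/ddtest of=/dev/null bs=1M

rm /mnt/raid/ddtest
```

Without conv=fdatasync (or a trailing sync), the write figure mostly measures RAM, not the array.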


What filesystem? What "benchmark" are you running? Stripe size? Are the
HDDs connected to a PCI(e|X) RAID controller or to the on-board
controller? Are those 70 MB/s and 35 MB/s measured locally or from a
client node? If from a client node, what are you using: FTP, Samba,
NFS, ...? Config files?

   Rafa

