On Sun, 2020-06-07 at 19:11 +1000, Phillip Smith wrote:
> > Please relax, I've been running a 6x8TB RAID5 for years.
> Have you had to replace a disk and do a rebuild?
Yes, unfortunately it has happened. Most of the time it's only bad sectors, one drive at a time.
IIRC the rebuild time was around 20h, depending on the I/O load on the array.
I have a timer that runs a full mdadm check every month, which scans the whole array.
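For reference, a minimal sketch of how such a check can be wired up with a systemd timer, assuming the array is /dev/md0 (unit names and device are examples; on Debian-based systems the bundled checkarray script does the same job):

    # /etc/systemd/system/md0-check.service
    [Unit]
    Description=Scrub /dev/md0

    [Service]
    Type=oneshot
    # Ask the md layer to read and verify every stripe of the array
    ExecStart=/bin/sh -c 'echo check > /sys/block/md0/md/sync_action'

    # /etc/systemd/system/md0-check.timer
    [Unit]
    Description=Monthly scrub of /dev/md0

    [Timer]
    OnCalendar=monthly
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with: systemctl enable --now md0-check.timer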
The most demanding situation is when I upgrade the disks; I have to run 6 rebuilds in a row before reshaping.
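For illustration, a per-disk upgrade step can look like this (a sketch assuming /dev/md0 and mdadm >= 3.3 for hot replace; device names are placeholders):

    # Add the new disk as a spare, then copy onto it while the old
    # disk stays active, so redundancy is kept during the rebuild
    mdadm /dev/md0 --add /dev/sdg1
    mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdg1

    # After all six disks are swapped, grow the array to the new size
    mdadm --grow /dev/md0 --size=max

The --replace path is gentler than fail/remove/add, because the array never runs degraded while the new disk is synced.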
It's obviously worse with consumer drives compared to enterprise drives - I'm not sure what Hetzner provisions, though.
Choosing your disk set and controller chipset are key decisions for sure, and they depend a lot on what you want to do with your array. But in recent years there has been a lot of marketing around differences between disks that sometimes come down to minor firmware tweaking or feature locking (like SCT ERC, which is useful for RAID arrays).
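You can check whether a drive exposes that configurable error-recovery timeout with smartmontools; a sketch, with /dev/sda as an example device:

    # Query the current SCT Error Recovery Control settings
    smartctl -l scterc /dev/sda

    # Set read/write recovery timeouts to 7 seconds (units are tenths
    # of a second), so a failing sector is reported quickly and md can
    # rewrite it from parity instead of waiting minutes on the drive
    smartctl -l scterc,70,70 /dev/sda

Drives with the feature locked out will simply report that SCT ERC is unsupported.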
On the software RAID vs btrfs RAID question, my experience has been better with md.
I had several issues with btrfs RAID over the years[1].
Note that btrfs raid56 is not recommended for metadata and there is still a write hole issue[2].
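The usual mitigation is to keep metadata out of raid5/6 at mkfs time; a sketch, with six example devices:

    # raid5 for data, raid1 for metadata, which sidesteps the
    # raid56 metadata caveats; device names are examples
    mkfs.btrfs -d raid5 -m raid1 /dev/sd[b-g]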
That said, I've had a 6x6TB btrfs RAID5 running well for a year now, and the last disk replacement was smooth.
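For what it's worth, the replacement itself is a one-liner; a sketch assuming the filesystem is mounted on /mnt/array and the device names are examples:

    # Rebuild onto the new disk directly from the remaining copies
    btrfs replace start /dev/sdb /dev/sdg /mnt/array
    # Watch rebuild progress
    btrfs replace status /mnt/array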
Regards,
Sébastien "Seblu" Luttringer