On 15.10.2013 21:37, Sean Greenslade wrote:
Hi, all. I'm running a small file server that has three SATA drives set up in RAID5 via mdadm. That RAID holds one LVM PV, which is split into several logical volumes. This setup has worked fine in the past, but with the latest system update my LVM partitions are not getting discovered correctly, leaving the boot hanging until I manually run "vgchange -ay". After that, the boot proceeds as normal.
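For anyone hitting the same hang, the manual workaround from the emergency shell looks roughly like this (a sketch; whether "exit" resumes the boot depends on where the shell was spawned):

    # activate every volume group LVM can find
    vgchange -ay
    # confirm the logical volumes are now visible
    lvs
    # leave the shell and let the boot continue
    exit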
It would appear that the latest lvm2 package is what causes the issue. Downgrading to 2.02.100-1 boots fine, whereas 2.02.103-1 hangs.
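In case it helps others, the downgrade can be done from the local pacman cache (the exact filename is an assumption; check what's actually in your cache directory):

    # reinstall the previously working lvm2 version from the package cache
    pacman -U /var/cache/pacman/pkg/lvm2-2.02.100-1-x86_64.pkg.tar.xz
    # regenerate the initramfs so it picks up the downgraded hooks
    mkinitcpio -p linux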
So how exactly should I proceed from here? I'm trying to understand how systemd makes it all work together, but I'm rather confused by it all.
Are you assembling the RAID and LVM in the initrd? If so, what's your HOOKS line in mkinitcpio.conf? I've seen reports like this before (although most people said they were fixed by updates and had had problems before 2.02.100). Sadly, I could never reproduce it, so I don't know how to debug it.
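For comparison, a typical mkinitcpio.conf HOOKS line for an mdadm+LVM setup looks something like this (an example, not necessarily your exact configuration):

    # /etc/mkinitcpio.conf -- assemble the array first, then activate LVM on it
    HOOKS="base udev autodetect modconf block mdadm_udev lvm2 filesystems keyboard fsck"

The ordering matters: mdadm_udev has to assemble the array before lvm2 can find the PV sitting on top of it.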