15 Oct 2013, 9:37 p.m.
Hi, all.

I'm running a small fileserver that has three SATA drives set up in RAID5 via mdadm. That RAID holds one LVM pv, which is split up into several logical volumes. This setup has worked fine in the past, but with the latest system update my LVM partitions are not being discovered correctly, so the boot hangs until I manually run "vgchange -ay". After that, the boot proceeds as normal.

It would appear that the latest lvm2 package is what causes the issue: downgrading to 2.02.100-1 boots fine, whereas 2.02.103-1 hangs.

So how exactly should I proceed from here? I'm trying to understand how systemd makes it all work together, but I'm rather confused by it all.

Thanks,
--Sean Greenslade
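P.S. In case it helps anyone else hitting this, here's roughly how I'm holding the old version for the time being. The cached package filename is just an example from my box, so adjust the path to whatever is actually sitting in your pacman cache:

    # reinstall the known-good lvm2 from the local package cache
    pacman -U /var/cache/pacman/pkg/lvm2-2.02.100-1-x86_64.pkg.tar.xz

    # then hold it at that version in /etc/pacman.conf until the regression is sorted out
    IgnorePkg = lvm2

Not a fix, obviously, just a stopgap so the machine keeps booting unattended while I figure out what changed between those two releases.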