On Wed, Oct 16, 2013 at 10:55:43AM +0200, Thomas Bächler wrote:
On 15.10.2013 at 21:37, Sean Greenslade wrote:
Hi, all. I'm running a small fileserver that has three SATA drives set up in RAID5 via mdadm. That RAID holds one LVM pv which is split up into several logical volumes. This setup has worked fine in the past, but with the latest system update my LVM partitions are not getting discovered correctly, leading to the boot hanging until I manually run "vgchange -ay". After that, the boot proceeds as normal.
It would appear that the latest lvm2 package is what causes the issue. Downgrading to 2.02.100-1 boots fine, whereas 2.02.103-1 hangs.
So how exactly should I proceed from here? I'm trying to understand how systemd makes it all work together, but I'm rather confused by it all.
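For reference, the manual workaround is just doing the activation by hand once the boot stalls, roughly:

    vgchange -ay    # scan and activate all volume groups so the LV device nodes appear

after which the pending device units show up and the boot finishes normally. I'd obviously like that activation to happen automatically again.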
Are you assembling RAID and LVM in initrd? If so, what's your HOOKS line in mkinitcpio.conf?
I've seen reports like this before (although most people had their problems before 2.02.100 and said updates fixed them). Sadly, I could never reproduce it, so I don't know how to debug it.
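The most I can suggest is booting with verbose logging and watching what happens around the dm devices. Something like this on the kernel command line (assuming systemd-udevd):

    systemd.log_level=debug udev.log_priority=debug

and then, after the hang, checking the journal for the device unit and the corresponding udev events, e.g.:

    journalctl -b | grep -iE 'dev-mapper|lvm|device-mapper'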
I can say with certainty that the mdadm assembly happens in the initrd, but I can't find any log messages pertaining to the LVM scan, even on a successful boot. There is the following line that occurs before the root pivot, and which is the line that breaks the boot with the latest lvm2:

Oct 15 17:01:14 rat systemd[1]: Expecting device dev-mapper-raidgroup\x2ddata.device...

Here are my mkinitcpio.conf lines (works with the downgrade, not with current):

MODULES="dm_mod"
HOOKS="base udev mdadm_udev autodetect modconf block lvm2 filesystems keyboard fsck"

--Sean
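P.S. If it helps narrow things down, I can also compare what the two lvm2 versions put into the image (lsinitcpio is from the mkinitcpio package; the path is my kernel's default image, adjust as needed):

    lsinitcpio /boot/initramfs-linux.img | grep -iE 'lvm|dm-'

Let me know if that output from the working and broken images would be useful.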