[arch-general] Installation: How to get HDD > LUKS > GPT working in a clean way

Yaro Kasear yaro at marupa.net
Sun Nov 27 20:41:41 UTC 2016


On 11/27/2016 09:48 AM, Merlin Büge wrote:
> Hey everybody!
>
>
> I'm currently installing Arch on my laptop (Thinkpad T400), and have decided
> on a rather unusual partition scheme: a single LUKS container directly on
> the disk (SSD) with a GPT partition table and two partitions inside it: one for
> swap, the other for the system and everything else, formatted with Btrfs.
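>
> For reference, that layout can be created roughly like this (a sketch;
> the device node, sizes and the mapper name 'enc' are placeholders):
>
>    cryptsetup luksFormat /dev/sda            # LUKS directly on the disk
>    cryptsetup open /dev/sda enc              # 'enc' is a placeholder name
>    parted -s /dev/mapper/enc mklabel gpt     # GPT inside the container
>    parted -s /dev/mapper/enc mkpart swap linux-swap 1MiB 9GiB
>    parted -s /dev/mapper/enc mkpart root btrfs 9GiB 100%
>    mkswap /dev/mapper/enc1
>    mkfs.btrfs /dev/mapper/enc2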
>
> The laptop runs libreboot, so I have GRUB2 as a payload inside the flash chip
> which I use to decrypt the LUKS container and load a GRUB configfile
> located at /boot/grub/grub.cfg (generated by grub-mkconfig). This works fine.
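>
> In GRUB terms the payload does something like the following (a sketch;
> the crypto device and partition numbering are assumptions):
>
>    cryptomount -a                   # unlock the LUKS container
>    set root=(crypto0,gpt2)          # assumed: btrfs is partition 2
>    configfile /boot/grub/grub.cfg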
>
>
> While experimenting with GPT inside LUKS before the installation I noticed
> two issues, at least one of which is also present after installation:
>
> First, after unlocking the LUKS container the two GPT partitions don't become
> visible to the kernel automatically. I have to manually do
>    partprobe /dev/mapper/<dmname>
> to inform the kernel about the two new partitions. partprobe is part of parted.
> My idea was to create a custom hook just after the 'encrypt' hook, which would
> simply run the above command. I tested this and it seems to work.
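>
> A minimal version of such a hook might look like this (a sketch; the
> mapper name 'enc' is a placeholder):
>
>    # /etc/initcpio/hooks/pp -- runtime part, runs after 'encrypt'
>    run_hook() {
>        partprobe /dev/mapper/enc    # 'enc' is a placeholder name
>    }
>
>    # /etc/initcpio/install/pp -- build part
>    build() {
>        add_binary partprobe         # pull partprobe into the initramfs
>        add_runscript                # include the runtime hook above
>    }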
>
> Question:
> Is there an even simpler solution to that problem? For example, an alternative
> to partprobe that is already in 'base'?
>
>
> The second issue was that I could not (after unmounting the Btrfs partition and
> deactivating the swap partition of course) directly close the LUKS mapping via
>    cryptsetup luksClose <dmname>
> It gave me:
>    device-mapper: remove ioctl on <dmname> failed: Device or resource busy
>    [...]
>    device-mapper: remove ioctl on <dmname> failed: Device or resource busy
>    Device <dmname> is still in use.
>
> Instead, I had to remove the partition mappings first via
>    dmsetup remove <dmname>1 <dmname>2
> This got rid of the aforementioned error messages.
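>
> So the full teardown sequence is something like this (a sketch; 'enc'
> and the mount point are placeholders):
>
>    umount /mnt                      # the btrfs partition
>    swapoff /dev/mapper/enc1         # the swap partition
>    dmsetup remove enc1 enc2         # remove the partition mappings first
>    cryptsetup luksClose enc         # now this succeeds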
>
> As expected, I get these error messages during system shutdown as well -- but
> only with the shutdown hook in the initramfs. Without it, I presume the system
> does not even try to close the LUKS container (which would make sense, since by
> default no initramfs is created for shutdown, afaik), so no error messages are
> shown.
>
> What could I do about this?
> I'd like my system to close the LUKS container cleanly -- which means the
> partition mappings have to be removed first.
>
>
> I've read a lot over the last days and weeks about Btrfs, SSDs, coreboot, etc.
> to make sure I wouldn't run into too many issues. And though these two issues
> aren't unexpected, I don't know how to solve the latter one, because systemd
> shutdown and the shutdown initramfs are still a bit of a mystery to me...
>
> I'd really appreciate any help!
>
> This is all on an up-to-date vanilla 4.8.10-1-ARCH.
> I attached two shutdown logs with debugging enabled: one with and one without
> the shutdown hook applied. They look very similar though. (I made sure to reboot
> twice after building the initramfs before taking the shutdown log.)
> My HOOKS array is:
>    HOOKS="base udev autodetect modconf block keyboard keymap encrypt pp \
>    filesystems fsck shutdown"
> (pp being the hook which runs partprobe against the mapped LUKS container)
>
>
> Best Regards,
>
> Merlin
>

Hi Merlin,

I'd set up two partitions: your EFI system partition and the LUKS 
container. Then, inside LUKS, make the whole thing an LVM physical 
volume and set up from there, rather than making the LUKS container 
another GPT "disk." Then you just use the encrypt and lvm2 hooks.
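
Roughly like this (a sketch -- device names, sizes and the volume names 
are placeholders):

   sgdisk -n 1:0:+512M -t 1:ef00 /dev/sda   # EFI system partition
   sgdisk -n 2:0:0     -t 2:8309 /dev/sda   # LUKS partition
   cryptsetup luksFormat /dev/sda2
   cryptsetup open /dev/sda2 cryptlvm       # 'cryptlvm' is a placeholder
   pvcreate /dev/mapper/cryptlvm
   vgcreate vg0 /dev/mapper/cryptlvm
   lvcreate -L 8G -n swap vg0
   lvcreate -l 100%FREE -n root vg0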

You should only really use partition tables on a physical disk, in my 
opinion, not a LUKS container.

The reason for this is that LVM is a lot more flexible, and activating 
its volumes is far more readily automated than getting the system to 
re-read partition tables.
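
With that layout the HOOKS line needs nothing custom, e.g.:

   HOOKS="base udev autodetect modconf block keyboard keymap encrypt \
   lvm2 filesystems fsck shutdown"

The lvm2 hook activates the volume group as soon as 'encrypt' has 
opened the container, so there is no partprobe step at all.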

If you were on a system where you could add disks, I'd even suggest 
reversing it: LVM on the metal and LUKS on a logical volume spanning the 
whole VG. That way you could grow the whole thing across multiple disks 
pretty easily without having to make any dramatic changes. I know btrfs 
can do multiple disks, but I've always preferred how LVM does it, honestly.
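
Something like this (again just a sketch with placeholder names):

   pvcreate /dev/sda2 /dev/sdb1             # LVM on the metal
   vgcreate vg0 /dev/sda2 /dev/sdb1
   lvcreate -l 100%FREE -n crypt vg0        # one LV spanning the whole VG
   cryptsetup luksFormat /dev/vg0/crypt     # LUKS on top of it

Growing it later is then just vgextend plus lvextend and 
cryptsetup resize.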

Yaro

