[arch-general] Partition mounting in systemd [WAS: Lennart Poettering on udev-systemd]
On Wed, Aug 15, 2012 at 8:13 AM, Tom Gundersen <teg@jklm.no> wrote:
On Wed, Aug 15, 2012 at 1:55 AM, David Benfell <benfell@parts-unknown.org> wrote:
Does systemd not use the standard mount program and follow /etc/fstab?
It does. Though it does not use "mount -a", but rather mounts each fs separately.
Ah, that ties in nicely with the weird symptoms I'm seeing right now. For background, you can read my recent forum post here - https://bbs.archlinux.org/viewtopic.php?pid=1146498#p1146498 - but it's not necessary for this question.

Basically, as part of troubleshooting the above problem, I attempted to reformat my /home partition (/dev/sda3) on my desktop to btrfs after quitting X and stopping the related stuff. I also tried this from a new boot without ever touching X.

umount /dev/sda3 worked, but mkfs.btrfs didn't, giving me 'still mounted' errors. When I boot without systemd (initscripts only), unmounting and mkfs.btrfs work fine.

Related - when I run systemctl -a | grep sda I get (on my systemd laptop, but I got the same on my desktop), trimmed for readability:

dev-sda.device loaded active plugged ST9250827AS
dev-sda1.device loaded active plugged ST9250827AS
sys-devi...da-sda1.device loaded active plugged ST9250827AS
sys-devi...da-sda2.device loaded active plugged ST9250827AS

Do I need to do something additional to get systemd to 'give up' partitions totally?
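[Editorial note: regardless of what systemd's .device/.mount units report, the kernel mount table is the ground truth for whether mkfs is safe. A minimal sketch of checking it directly; /dev/sda3 is the device from this message, substitute your own:]

```shell
# Before mkfs, verify the device really is gone from the kernel's own
# mount table (/proc/self/mounts), whatever systemctl -a still lists.
dev=/dev/sda3
if awk -v d="$dev" '$1 == d { found = 1 } END { exit !found }' /proc/self/mounts; then
    echo "$dev is still mounted -- do not run mkfs on it"
else
    echo "$dev is not in the kernel mount table"
fi
```

findmnt (from util-linux), suggested later in the thread, answers the same question more directly when it is available.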
On Wednesday 15 Aug 2012 8:53:37 AM Oon-Ee Ng wrote:
Do I need to do something additional to get systemd to 'give up' partitions totally?
tell systemd not to use fsck on the btrfs partition? Something like this? (pasted from my fstab)

/dev/sda1 /data btrfs noatime,flushoncommit,defaults 0 0

Does that help?

-- Regards Shridhar
On Aug 15, 2012 2:53 AM, "Oon-Ee Ng" <ngoonee.talk@gmail.com> wrote:
On Wed, Aug 15, 2012 at 8:13 AM, Tom Gundersen <teg@jklm.no> wrote:
On Wed, Aug 15, 2012 at 1:55 AM, David Benfell <benfell@parts-unknown.org> wrote:
Does systemd not use the standard mount program and follow /etc/fstab?
It does. Though it does not use "mount -a", but rather mounts each fs separately.
Ah, that ties in nicely with the weird symptoms I'm seeing right now. For background, you can read my recent forum post here - https://bbs.archlinux.org/viewtopic.php?pid=1146498#p1146498 - but it's not necessary for this question
Basically as part of troubleshooting the above problem, I attempted to reformat my /home partition (/dev/sda3) on my desktop to btrfs after quitting X and stopping the related stuff. I also tried this from a new boot without ever touching X.
umount /dev/sda3 worked, but mkfs.btrfs didn't, giving me 'still mounted' errors. When I boot without systemd (initscripts only), unmounting and mkfs.btrfs work fine.
Related - when I run systemctl -a | grep sda I get (on my systemd laptop, but I got the same on my desktop), trimmed for readability:

dev-sda.device loaded active plugged ST9250827AS
dev-sda1.device loaded active plugged ST9250827AS
sys-devi...da-sda1.device loaded active plugged ST9250827AS
sys-devi...da-sda2.device loaded active plugged ST9250827AS
Do I need to do something additional to get systemd to 'give up' partitions totally?
What does findmnt say?
On Wed, Aug 15, 2012 at 1:34 PM, Shridhar Daithankar <ghodechhap@ghodechhap.net> wrote:
tell systemd not to use fsck on btrfs partition? Something like this?(pasted from my fstab)
/dev/sda1 /data btrfs noatime,flushoncommit,defaults 0 0
Does that help?
Thanks for this, will try it out with systemd (not right now though), but why would an initial fsck affect unmounting behaviour? To be clear, I know how to turn off fsck for btrfs (the final 0), that's not an issue; I'm just wondering why my ext4 partition can't be repartitioned after unmounting.

On Wed, Aug 15, 2012 at 1:54 PM, Tom Gundersen <teg@jklm.no> wrote:
What does findmnt say?
Right now, nothing much, I've reinstalled and am using initscripts while testing the bug in the forum post. I will reply here when I've had time to verify the bug and go back to systemd. Sorry for the delay, and thanks for all your work on Arch and initscripts/systemd.
On 08/14/2012 08:53 PM, Oon-Ee Ng wrote:
On Wed, Aug 15, 2012 at 8:13 AM, Tom Gundersen <teg@jklm.no> wrote:
On Wed, Aug 15, 2012 at 1:55 AM, David Benfell <benfell@parts-unknown.org> wrote:
Does systemd not use the standard mount program and follow /etc/fstab?

It does. Though it does not use "mount -a", but rather mounts each fs separately.
[putolin]

I came across another anomaly on my systemd boxes that I would like someone to verify if they could. Please do this on a backup system.

I was changing some lvm partitions about that were mounted in /etc/fstab; actually, I removed them and created two new lvm partitions with different names, but failed to update the fstab. Upon rebooting, the systems failed to boot and were stuck trying to mount the non-existent lvm partitions. I could not fix the systems as I could not get a "recovery" bash prompt. I had to use a boot live CD to edit the fstab and then all was well. On all my sysvinit systems a bad mount point would just give me an error and continue booting.

Could some brave enterprising soul confirm this?

This created the following question: Can systemd boot a system without a fstab?
I could not fix the systems as I could not get
a "recovery" bash prompt. I had to use a boot live CD to edit the fstab and then all was well. On all my sysvinit systems a bad mount point would just give me an error and continue booting.
Wouldn't it have been easier to just start with init=/bin/bash ? Just asking, as this would have been my first attempt at solving the problem. Greetings, Christoph
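[Editorial note: once a shell is available (via init=/bin/bash or a live CD), the fix is just neutralizing the stale entries. A hypothetical sketch of automating that, run here against a scratch copy of an fstab so it is safe to try anywhere; the /dev/lvm paths are the ones from this thread, and on a real rescue shell you would edit /etc/fstab itself:]

```shell
# Comment out fstab entries whose source device no longer exists.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/lvm/lfs  /mnt/lfs      ext4  defaults  0 2
/dev/lvm/LFS  /mnt/lfs/LFS  ext4  defaults  0 2
/dev/null     /mnt/keep     ext4  defaults  0 0
EOF
while read -r src _; do
    case "$src" in ''|\#*) continue ;; esac
    # If the device node is gone, prefix the whole line with '#'.
    [ -e "$src" ] || sed -i "s|^$src |#&|" "$fstab"
done < "$fstab"
grep -c '^#' "$fstab"   # prints 2: both stale lvm entries are commented out
```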
On 08/15/2012 09:30 AM, Christoph Vigano wrote:
I could not fix the systems as I could not get a "recovery" bash prompt. I had to use a boot live CD to edit the fstab and then all was well. On all my sysvinit systems a bad mount point would just give me an error and continue booting.

Wouldn't it have been easier to just start with init=/bin/bash ? Just asking, as this would have been my first attempt at solving the problem.

Greetings, Christoph
maybe. I usually just boot to a rescue CD or USB, mount the root partition, and go to work at it. When I break things or have boot failures I don't know what is wrong until I look. Sometimes, if you are using jfs on root, all that is needed is an fsck, but it won't boot because something is buggered, so init=/bin/bash doesn't work; so I just get the usb drive and plug and play. I do this so I can invoke the maximum damage to the system under abuse :)
On Wed, Aug 15, 2012 at 6:38 AM, Baho Utot <baho-utot@columbus.rr.com> wrote:
On 08/14/2012 08:53 PM, Oon-Ee Ng wrote:
On Wed, Aug 15, 2012 at 8:13 AM, Tom Gundersen <teg@jklm.no> wrote:
On Wed, Aug 15, 2012 at 1:55 AM, David Benfell <benfell@parts-unknown.org> wrote:
Does systemd not use the standard mount program and follow /etc/fstab?
It does. Though it does not use "mount -a", but rather mounts each fs separately.
[putolin]
I came across another anomaly on my systemd boxes that I would like someone to verify if they could. Please do this on a backup system.
I was changing some lvm partitions about that were mounted in /etc/fstab; actually, I removed them and created two new lvm partitions with different names, but failed to update the fstab. Upon rebooting, the systems failed to boot and were stuck trying to mount the non-existent lvm partitions. I could not fix the systems as I could not get a "recovery" bash prompt. I had to use a boot live CD to edit the fstab and then all was well. On all my sysvinit systems a bad mount point would just give me an error and continue booting.
Could some brave enterprising soul confirm this?
This created the following question: Can systemd boot a system without a fstab?
you would have to provide the mountpoints -- depending on what you were mounting i'm quite sure initscripts would fail (/usr? /var? what was changed??), though they may very well just keep chugging on, pretending all is well.

root mount depends on nothing more than what's listed on the kernel cmdline in grub.cfg or equivalent. you could have also added `break=y` (legacy form, i forget the new syntax) to open a shell in the initramfs and correct from there.

AFAIK systemd doesn't NEED an fstab, but you would then need to provide native *.mount files instead ... SOMETHING has to tell it where the mounts go, yes?

-- C Anthony
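[Editorial note: for reference, a native mount unit corresponding to the fstab line quoted earlier in the thread might look roughly like this. A sketch, not a drop-in file; note that systemd derives the unit file name from the mount point, so a mount on /data must be named data.mount:]

```ini
# /etc/systemd/system/data.mount -- file name must match Where=
[Unit]
Description=Data volume

[Mount]
What=/dev/sda1
Where=/data
Type=btrfs
Options=noatime,flushoncommit

[Install]
WantedBy=local-fs.target
```

systemd also translates /etc/fstab entries into equivalent internal units at boot, which is why, as Tom says above, it mounts each filesystem separately rather than running "mount -a".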
though they may very well just keep chugging on, pretending all is well.
Very last post on systemd, as you've said this before and I chose not to respond.

No, they will throw a descriptive or general error and do what the script author intended, which could be subroutines, or traps which could ask the user anything, and ^C may work too. You see this as a good thing? Was systemd intended to just stop without a prompt?

--
_______________________________________________________________________
'Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface' (Doug McIlroy)
_______________________________________________________________________
On Wed, Aug 15, 2012 at 10:29 AM, Kevin Chadwick <ma1l1ists@yahoo.co.uk> wrote:
though they may very well just keep chugging on, pretending all is well.
Very last post on systemd as you've said this before and I chose not to respond.
No, they will throw a descriptive or general error and do what the script author intended, which could be subroutines, or traps which could ask the user anything, and ^C may work too. You see this as a good thing? Was systemd intended to just stop without a prompt?
i don't at all understand what you're trying to say/insinuate here?

systemd will indeed drop to a prompt if the problem is critical (though i'm admittedly not 100% sure where that boundary lies, i think if `basic.target` isn't reached or something ...), but it has a rather long timeout that could easily lead a user into thinking it's "stuck" (something obnoxious like 5 minutes IIRC).

at any rate, i'm pretty sure a failing mount only blocks boot if it's a system/api mountpoint ... i had a bad fstab at one point and i don't recall any serious issue, which is why the OP needs to provide more information.

initscripts != sysvinit. there are no sufficiently unique requirements across distributions to warrant each one writing near 100% custom boot routines -- we all boot pretty much the same way.

-- C Anthony
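[Editorial note: if the worry is a missing device holding up boot, fstab itself can mark a mount as non-critical. A sketch, assuming a systemd new enough to honour the x-systemd.* fstab options; the /dev/lvm/wip path is the example from this thread:]

```
# /etc/fstab -- nofail lets boot continue if the device is absent,
# and x-systemd.device-timeout shortens the (long) default wait:
/dev/lvm/wip  /mnt/wip  ext4  defaults,nofail,x-systemd.device-timeout=10  0 2
```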
On 08/15/2012 11:01 AM, C Anthony Risinger wrote:
On 08/14/2012 08:53 PM, Oon-Ee Ng wrote:
On Wed, Aug 15, 2012 at 8:13 AM, Tom Gundersen <teg@jklm.no> wrote:
On Wed, Aug 15, 2012 at 1:55 AM, David Benfell <benfell@parts-unknown.org> wrote:
Does systemd not use the standard mount program and follow /etc/fstab?

It does. Though it does not use "mount -a", but rather mounts each fs separately.
[putolin]
I came across another anomaly on my systemd boxes that I would like someone to verify if they could. Please do this on a backup system.
I was changing some lvm partitions about that were mounted in /etc/fstab; actually, I removed them and created two new lvm partitions with different names, but failed to update the fstab. Upon rebooting, the systems failed to boot and were stuck trying to mount the non-existent lvm partitions. I could not fix the systems as I could not get a "recovery" bash prompt. I had to use a boot live CD to edit the fstab and then all was well. On all my sysvinit systems a bad mount point would just give me an error and continue booting.
Could some brave enterprising soul confirm this?
This created the following question: Can systemd boot a system without a fstab?

you would have to provide the mountpoints -- depending on what you were mounting i'm quite sure initscripts would fail (/usr? /var? what was changed??), though they may very well just keep chugging on, pretending all is well.
root mount depends on nothing more than what's listed on the kernel cmdline in grub.cfg or equivalent. you could have also added `break=y` (legacy form, i forget the new syntax) to open a shell in the initramfs and correct from there.
AFAIK systemd doesn't NEED an fstab, but you would then need to provide native *.mount files instead ... SOMETHING has to tell it where the mounts go, yes?
I don't know what you're pointing out here.

What I had was /dev/lvm/lfs and /dev/lvm/LFS in the fstab. These were mounted onto /mnt/lfs and /mnt/lfs/LFS.

I removed those from lvm and created /dev/lvm/wip and /dev/lvm/WIP, and I did not remove /dev/lvm/lfs and /dev/lvm/LFS from the fstab file, then rebooted.

As far as I could tell, systemd rolled over because it could not mount the lfs and LFS lvm partitions, because they were not there. It just hung waiting for mount points that just weren't going to show up no matter what. I could not get a "maintenance prompt"; it was just stuck trying to mount the non-existent lvm partitions.

My sysvinit systems simply spit out an error "can't mount whatever blah blah blah" and continued to boot. Of course those points were not mounted, but the system did boot fully.

As for booting without an fstab, I do that a lot on my custom "rescue" usb thumb drives, as they do not have an fstab file at all. I use no *.mount files at all and the system works just fine ... the kernel knows where its root file system is. Try removing/moving the fstab from a test system. It will boot and run fine; of course you will lose swap and other such things, but if you have everything on one partition you're good.
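[Editorial note: the root filesystem indeed does not come from fstab; the initramfs mounts whatever root= names on the kernel command line, which is why a fstab-less system still boots. A quick way to see it:]

```shell
# Show the root= parameter the kernel was booted with, if any.
grep -o 'root=[^ ]*' /proc/cmdline || echo "no explicit root= on the cmdline"
```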
On Wed, Aug 15, 2012 at 12:19 PM, Baho Utot <baho-utot@columbus.rr.com> wrote:
On 08/15/2012 11:01 AM, C Anthony Risinger wrote:
On Wed, Aug 15, 2012 at 6:38 AM, Baho Utot <baho-utot@columbus.rr.com> wrote:
On 08/14/2012 08:53 PM, Oon-Ee Ng wrote:
On Wed, Aug 15, 2012 at 8:13 AM, Tom Gundersen <teg@jklm.no> wrote:
On Wed, Aug 15, 2012 at 1:55 AM, David Benfell <benfell@parts-unknown.org> wrote:
Does systemd not use the standard mount program and follow /etc/fstab?
It does. Though it does not use "mount -a", but rather mounts each fs separately.
[putolin]
I came across another anomaly on my systemd boxes that I would like someone to verify if they could. Please do this on a backup system.
I was changing some lvm partitions about that were mounted in /etc/fstab; actually, I removed them and created two new lvm partitions with different names, but failed to update the fstab. Upon rebooting, the systems failed to boot and were stuck trying to mount the non-existent lvm partitions. I could not fix the systems as I could not get a "recovery" bash prompt. I had to use a boot live CD to edit the fstab and then all was well. On all my sysvinit systems a bad mount point would just give me an error and continue booting.
Could some brave enterprising soul confirm this?
This created the following question: Can systemd boot a system without a fstab?
you would have to provide the mountpoints -- depending on what you were mounting i'm quite sure initscripts would fail (/usr? /var? what was changed??), though they may very well just keep chugging on, pretending all is well.
root mount depends on nothing more than what's listed on the kernel cmdline in grub.cfg or equivalent. you could have also added `break=y` (legacy form, i forget the new syntax) to open a shell in the initramfs and correct from there.
AFAIK systemd doesn't NEED an fstab, but you would then need to provide native *.mount files instead ... SOMETHING has to tell it where the mounts go, yes?
I don't know what you're pointing out here.
i asked you a question -- i don't know what i'd be pointing out either.
What I had was /dev/lvm/lfs and /dev/lvm/LFS in the fstab. These were mounted onto /mnt/lfs and /mnt/lfs/LFS.
/dev/lvm/? (/dev/mapper/?)
I removed those from lvm and created /dev/lvm/wip and /dev/lvm/WIP and I did not remove the /dev/lvm/lfs and /dev/lvm/LFS from the fstab file, then rebooted.
As far as I could tell, systemd rolled over because it could not mount the lfs and LFS lvm partitions, because they were not there. It just hung waiting for mount points that just weren't going to show up no matter what. I could not get a "maintenance prompt"; it was just stuck trying to mount the non-existent lvm partitions.
i said it would time out. i'm not 100% sure why that is considered a system-critical mount, but there is nothing special about your experience; at the very least it's just a Plain Old Bug, if anything, provided there are no other details you've not realized and/or disclosed. just report/investigate man, and things get better for everyone.
My sysvinit systems simply spit out an error "can't mount whatever blah blah blah" and continued to boot. Of course those points were not mounted, but the system did boot fully.
ok? my guess is systemd would timeout and move on as well.
As for booting without an fstab, I do that a lot on my custom "rescue" usb thumb drives, as they do not have an fstab file at all. I use no *.mount files at all and the system works just fine ... the kernel knows where its root file system is. Try removing/moving the fstab from a test system. It will boot and run fine; of course you will lose swap and other such things, but if you have everything on one partition you're good.
.... yeah i think i already said something to this effect (RE: grub.cfg), but `init=/bin/bash` is the painless way to fix it in 30 seconds -- if you're a badass (which i am, anyway ;-).

-- C Anthony
participants (7)
- Baho Utot
- C Anthony Risinger
- Christoph Vigano
- Kevin Chadwick
- Oon-Ee Ng
- Shridhar Daithankar
- Tom Gundersen