[arch-general] Installing to RAID .. cannot reboot
I have carefully followed the RAID instructions at: https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#RAID_installation

I have not used LVM, just RAID. I have double-checked what I actually did, and believe that I did exactly what the page tells me to do. I have two identical drives, which I configured as RAID1.

At the conclusion of the installation, the instructions say:
----
Once it is complete you can safely reboot your machine:
# reboot
----

When I try to reboot, I receive the error messages:
ERROR: device /dev/md0 not found
ERROR: unable to find root device /dev/md0

What do I have to do to get past this error?

Doc

PS I tried to go back to the very beginning and walk through the instructions again, but when I do that, when I reach this step:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[cd]3
I am now informed that /dev/sdc3 is busy or unavailable.

-- Web: http://www.sff.net/people/N7DR
D. R. Evans said the following at 06/20/2012 11:27 AM :
I have carefully followed the RAID instructions at: https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#RAID_installation
I have not used LVM, just RAID. I have double-checked what I actually did, and believe that I did exactly what the page tells me to do.
I have two identical drives, which I configured as RAID1.
At the conclusion of the installation, the instructions say:
----
Once it is complete you can safely reboot your machine:
# reboot
----
When I try to reboot, I receive the error messages:
ERROR: device /dev/md0 not found
ERROR: unable to find root device /dev/md0
What do I have to do to get past this error?
Doc
PS I tried to go back to the very beginning and walk through the instructions again, but when I do that, when I reach this step: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[cd]3 I am now informed that /dev/sdc3 is busy or unavailable.
Having spent a chunk of the day on this:

1. I figured out how to avoid the "busy or unavailable" message: one issues the "mdadm --stop" command. One can then go back to the beginning and re-create the RAID array(s).

2. So I meticulously started from scratch and, checking every command carefully, followed the instructions until I reached the point where they tell me to reboot.

3. On reboot, I am in exactly the same situation as before: the system can't boot because it cannot find /dev/md0.

I therefore provisionally conclude that of the two possibilities:
α) I am making a mistake in following the instructions
β) The instructions contain a fatal error
possibility β seems much the more likely. Most likely something is omitted, I think.

I (obviously) have no way of knowing where the mistake lies, though, so I'm completely stuck until someone who understands how all this is supposed to work can look carefully at the instructions and perhaps spot the error.

After some experimentation with Arch on other (non-RAID) computers, I had settled on Arch for my main desktop. But that's a RAID system, and unless I can get Arch to install correctly I'm going to have to go looking for a different distro -- which I really, really don't want to have to do, because there's a lot about Arch that I like :-(

Doc

-- Web: http://www.sff.net/people/N7DR
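[Editor's note: the "stop, then re-create" recovery in point 1 above can be sketched as a dry-run script. The run() wrapper only echoes each command, so this is safe to execute anywhere; remove it to run the commands for real. The device names (/dev/md0, /dev/sdc3, /dev/sdd3) are taken from the thread and may differ on another system; the --zero-superblock step is an added suggestion, not something stated in the thread.]

```shell
# Echo-only wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# Stop the stale array so its member partitions are no longer "busy":
run mdadm --stop /dev/md0

# Optionally wipe the old RAID superblocks before re-creating (assumption,
# not from the thread -- destroys the old array metadata):
run mdadm --zero-superblock /dev/sdc3 /dev/sdd3

# Re-create the RAID1 array from scratch, as in the wiki step quoted above:
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[cd]3
```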
On 20/06/12 at 04:10pm, D. R. Evans wrote:
D. R. Evans said the following at 06/20/2012 11:27 AM :
I have carefully followed the RAID instructions at: https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#RAID_installation
I have not used LVM, just RAID. I have double-checked what I actually did, and believe that I did exactly what the page tells me to do.
I have two identical drives, which I configured as RAID1.
At the conclusion of the installation, the instructions say:
----
Once it is complete you can safely reboot your machine:
# reboot
----
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0
What do I have to do to get past this error?
Doc
PS I tried to go back to the very beginning and walk through the instructions again, but when I do that, when I reach this step: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[cd]3 I am now informed that /dev/sdc3 is busy or unavailable.
Having spent a chunk of the day on this:
1. I figured out how to avoid the "busy or unavailable" message: one issues the "mdadm --stop" command. One can then go back to the beginning and re-create the RAID array(s).
2. So I meticulously started from scratch and, checking every command carefully, followed the instructions until I reached the point where they tell me to reboot.
3. On reboot, I am in exactly the same situation as before: the system can't boot because it cannot find /dev/md0.
I therefore provisionally conclude that of the two possibilities:
α) I am making a mistake in following the instructions
β) The instructions contain a fatal error
possibility β seems much the more likely. Most likely something is omitted, I think.
I encountered this same issue at the start of the year: it is simply a matter of completing the installation and chrooting in to install grub to both your devices: http://jasonwryan.com/blog/2012/02/11/lvm/ I had thought I added a note to the wiki to that effect… HTH /J
Jason Ryan said the following at 06/20/2012 04:54 PM :
I encountered this same issue at the start of the year: it is simply a matter of completing the installation and chrooting in to install grub to both your devices: http://jasonwryan.com/blog/2012/02/11/lvm/
The wiki seems strongly to urge one NOT to use grub, and the example given is for syslinux, so that's what I've been doing. Indeed, it says that the 2011.08.19 Arch Linux installer does not support GRUB2. (I'm a bit confused about whether, when "grub" is mentioned, it means "grub2"; the wiki could be a lot clearer about that -- to be on the safe side, I avoided grub entirely and went with syslinux instead.) 2011.08.19 is what one gets if one downloads the current installer.

According to the wiki, the step of copying the bootloader is supposed to occur after the reboot, and your page says that that should be possible. I did not realise it could be done before the reboot (since the wiki doesn't mention that possibility), so I have not tried that. I was planning to follow the new instruction placed into the wiki by Paul Dann and mentioned in his e-mail <2457829.5nCasJRlDv@leto> once I got past the reboot stage.

So perhaps, having now discovered that the bootloader can be copied before the reboot, what I need to do is to go through the entire procedure again and this time copy the bootloader with
/usr/sbin/syslinux-install_update -iam
BEFORE the reboot.

Incidentally, I have tried to boot off both the disks in the RAID1 array; they both fail with the same error (unable to find /dev/md0).

Doc

-- Web: http://www.sff.net/people/N7DR
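[Editor's note: the plan described above -- installing the bootloader before the first reboot -- might look like this as a dry-run sketch. The run() wrapper only echoes commands; the mount point /mnt is an assumption (wherever the installer has the new system mounted), and the -iam flags are taken verbatim from the message above.]

```shell
# Echo-only wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# Assumption: the freshly installed system is still mounted at /mnt.
# Run syslinux-install_update inside it before rebooting, with the flags
# quoted in the thread (-i install, -a mark partition active, -m write MBR):
run chroot /mnt /usr/sbin/syslinux-install_update -iam
```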
# reboot
----
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0
What do I have to do to get past this error?
Doc
PS I tried to go back to the very beginning and walk through the instructions again, but when I do that, when I reach this step: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[cd]3 I am now informed that /dev/sdc3 is busy or unavailable.
why are the sata devices sdc and sdd?

did you set up the mkinitcpio.conf as it says here: https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#Configure_system

-- дамјан
Damjan said the following at 06/20/2012 04:21 PM :
# reboot
----
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0
What do I have to do to get past this error?
Doc
PS I tried to go back to the very beginning and walk through the instructions again, but when I do that, when I reach this step: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[cd]3 I am now informed that /dev/sdc3 is busy or unavailable.
why are the sata devices sdc and sdd ?
I have a different OS on sda and sdb (which are also a RAID1 pair).
did you setup the mkinitcpio.conf as it says here: https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#Configure_system
Yes.

Doc

-- Web: http://www.sff.net/people/N7DR
On Wednesday 20 Jun 2012 11:27:54 D. R. Evans wrote:
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0
To me, this sounds like the RAID array is being given the wrong name, or the mdadm hook isn't being added to /etc/mkinitcpio.conf.

When you get the above error message, are you dropped to a busybox shell? If so, can you do:

# ls /dev/md*

...to see if the array is being started at all?

This should be fixable without going through a full install each time. Just boot the install disk, insert md_mod, and cat /proc/mdstat to see if your RAID array is working. If it is, you can chroot into it in the normal way and fix whatever configuration issue this turns out to be.

Paul
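[Editor's note: Paul's check -- whether the array was started at all -- can be scripted by looking for active md devices in /proc/mdstat. A minimal sketch; for illustration it reads a here-doc with sample contents (the sample data is invented), but on the target machine you would point it at the real /proc/mdstat.]

```shell
# Count active md arrays in an mdstat-format file.
check_mdstat() {
    grep -c '^md[0-9]* : active' "$1"
}

# Illustrative sample of what /proc/mdstat looks like with one healthy
# RAID1 array (contents are made up for this example):
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdd3[1] sdc3[0]
      524224 blocks [2/2] [UU]
unused devices: <none>
EOF

check_mdstat /tmp/mdstat.sample   # prints 1: one active array found
```

On the real system you would call `check_mdstat /proc/mdstat`; a result of 0 means the initramfs never assembled the array, which matches the "/dev/md0 not found" symptom.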
Paul Gideon Dann said the following at 06/21/2012 03:41 AM :
On Wednesday 20 Jun 2012 11:27:54 D. R. Evans wrote:
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0
To me, this sounds like the RAID array is being given the wrong name, or the mdadm hook isn't being added to /etc/mkinitcpio.conf.
Extract from /etc/mkinitcpio.conf (sorry about any possible wrapping issue):

MODULES="dm_mod"
...
HOOKS="base udev mdadm_udev lvm2 autodetect pata scsi sata filesystems usbinput fsck"
When you get the above error message, are you dropped to a busybox shell? If so, can you do:
# ls /dev/md*
...to see if the array is being started at all?
I am dropped to some sort of primitive shell (the prompt says "rootfs"). As you suspect, ls /dev/md* reports no such file or device. So I think we have a clue, and it looks like you are right that the RAID is not being started, although I don't know why.

A couple more facts that may provide useful information:

1. I also did an ls /etc and I see that there are only three entries: fstab, mtab and udev. I don't know if that's reasonable. I naïvely expected to see a populated /etc (since presumably /etc/mkinitcpio.conf has been read at this point), but perhaps that expectation was incorrect.

2. I checked that the RAID will start correctly if I assemble the array from within a different OS, and it does so.

Doc

-- Web: http://www.sff.net/people/N7DR
On Thursday 21 Jun 2012 09:44:03 D. R. Evans wrote:
Extract from /etc/mkinitcpio.conf (sorry about any possible wrapping issue):
MODULES="dm_mod"
...
HOOKS="base udev mdadm_udev lvm2 autodetect pata scsi sata filesystems usbinput fsck"
I have two RAID setups that work well for me. On both, I don't have anything in the MODULES line, and the following hooks:

HOOKS="base udev autodetect pata scsi sata mdadm lvm2 filesystems"

I hope this helps. I don't know anything about the mdadm_udev hook.

Paul
On 21/06/12 at 06:13pm, Paul Gideon Dann wrote:
On Thursday 21 Jun 2012 09:44:03 D. R. Evans wrote:
Extract from /etc/mkinitcpio.conf (sorry about any possible wrapping issue):
MODULES="dm_mod"
...
HOOKS="base udev mdadm_udev lvm2 autodetect pata scsi sata filesystems usbinput fsck"
I have two RAID setups that work well for me. On both, I don't have anything in the MODULES line, and the following hooks:
HOOKS="base udev autodetect pata scsi sata mdadm lvm2 filesystems"
I hope this helps. I don't know anything about the mdadm_udev hook.
“Assembly via udev is also possible using the mdadm_udev hook. Upstream prefers this method of assembly.” Arch Wiki /J -- http://jasonwryan.com/ [GnuPG Key: B1BD4E40]
D. R. Evans wrote:
Paul Gideon Dann said the following at 06/21/2012 03:41 AM :
On Wednesday 20 Jun 2012 11:27:54 D. R. Evans wrote:
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0

To me, this sounds like the RAID array is being given the wrong name, or the mdadm hook isn't being added to /etc/mkinitcpio.conf.
Extract from /etc/mkinitcpio.conf (sorry about any possible wrapping issue):
MODULES="dm_mod"
...

Try adding "raid1" there...
Jerome -- mailto:jeberger@free.fr http://jeberger.free.fr Jabber: jeberger@jabber.fr
"Jérôme M. Berger" said the following at 06/21/2012 11:13 AM :
D. R. Evans wrote:
Paul Gideon Dann said the following at 06/21/2012 03:41 AM :
On Wednesday 20 Jun 2012 11:27:54 D. R. Evans wrote:
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0

To me, this sounds like the RAID array is being given the wrong name, or the mdadm hook isn't being added to /etc/mkinitcpio.conf.
Extract from /etc/mkinitcpio.conf (sorry about any possible wrapping issue):
MODULES="dm_mod"
...

Try adding "raid1" there...
Extract of /etc/mkinitcpio.conf now reads:

MODULES="dm_mod raid1"
...
HOOKS="base udev mdadm_udev lvm2 autodetect pata scsi sata filesystems usbinput fsck"

It made no difference, though :-(

Doc

-- Web: http://www.sff.net/people/N7DR
On Thu, Jun 21, 2012 at 9:21 PM, D. R. Evans <doc.evans@gmail.com> wrote:
"Jérôme M. Berger" said the following at 06/21/2012 11:13 AM :
D. R. Evans wrote:
Paul Gideon Dann said the following at 06/21/2012 03:41 AM :
On Wednesday 20 Jun 2012 11:27:54 D. R. Evans wrote:
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0

To me, this sounds like the RAID array is being given the wrong name, or the mdadm hook isn't being added to /etc/mkinitcpio.conf.
Extract from /etc/mkinitcpio.conf (sorry about any possible wrapping issue):
MODULES="dm_mod"
...

Try adding "raid1" there...
Extract of /etc/mkinitcpio.conf now reads:

MODULES="dm_mod raid1"
...
HOOKS="base udev mdadm_udev lvm2 autodetect pata scsi sata filesystems usbinput fsck"
It made no difference, though :-(
Doc
-- Web: http://www.sff.net/people/N7DR
Sorry for the stupid question, but do you rebuild the init ram image after changing your mkinitcpio.conf ? --Chris Sakalis
Chris Sakalis said the following at 06/21/2012 01:15 PM :
On Thu, Jun 21, 2012 at 9:21 PM, D. R. Evans <doc.evans@gmail.com> wrote:
"Jérôme M. Berger" said the following at 06/21/2012 11:13 AM :
D. R. Evans wrote:
Paul Gideon Dann said the following at 06/21/2012 03:41 AM :
On Wednesday 20 Jun 2012 11:27:54 D. R. Evans wrote:
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0

To me, this sounds like the RAID array is being given the wrong name, or the mdadm hook isn't being added to /etc/mkinitcpio.conf.
Extract from /etc/mkinitcpio.conf (sorry about any possible wrapping issue):
MODULES="dm_mod"
...

Try adding "raid1" there...
Extract of /etc/mkinitcpio.conf now reads:

MODULES="dm_mod raid1"
...
HOOKS="base udev mdadm_udev lvm2 autodetect pata scsi sata filesystems usbinput fsck"
It made no difference, though :-(
Doc
-- Web: http://www.sff.net/people/N7DR
Sorry for the stupid question, but do you rebuild the init ram image after changing your mkinitcpio.conf ?
I don't understand the question, so it's not at all stupid. I'm just following the instructions on the wiki. The wiki says to edit mkinitcpio.conf immediately prior to the reboot, so that's what I'm doing. It doesn't mention any need to rebuild an image (nor how to do it).

Doc

-- Web: http://www.sff.net/people/N7DR
On Thu, Jun 21, 2012 at 10:39 PM, D. R. Evans <doc.evans@gmail.com> wrote:
Sorry for the stupid question, but do you rebuild the init ram image after changing your mkinitcpio.conf ?
I don't understand the question, so it's not at all stupid. I'm just following the instructions on the wiki. The wiki says to edit mkinitcpio.conf immediately prior to the reboot, so that's what I'm doing. It doesn't mention any need to rebuild an image (nor how to do it).
Doc
-- Web: http://www.sff.net/people/N7DR
On the wiki, it does not mention anything about rebuilding your initramfs, because mkinitcpio is called automatically by the installer after the configuration phase. However:

mkinitcpio.conf is read by the mkinitcpio utility in order to create the initial ramdisk image. Essentially, it's a bunch of stuff needed by the kernel before the root filesystem is made available. After you edit the mkinitcpio.conf file, you HAVE to rebuild[1] your initramfs image in order for the changes to actually take effect. If you just edit the file and then reboot, nothing is done.

Also, I assume you are using the live CD for making your changes. Make sure that, when creating the images, you have chrooted[2] into your installation and you are not just creating them on the Live CD fs.

--Chris Sakalis

[1] - https://wiki.archlinux.org/index.php/Mkinitcpio#Image_creation_and_activatio...
[2] - https://wiki.archlinux.org/index.php/Chroot
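[Editor's note: the chroot-and-rebuild procedure Chris describes can be sketched as a dry-run script. The run() wrapper only echoes the commands; the mount point /mnt and the bind-mount details are assumptions about a typical live-CD session, and /dev/md0 is the root array from this thread.]

```shell
# Echo-only wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# Assumption: booted from the live CD; mount the installed system's root
# array and the pseudo-filesystems a chroot needs:
run mount /dev/md0 /mnt
run mount -o bind /dev /mnt/dev
run mount -t proc proc /mnt/proc
run mount -t sysfs sys /mnt/sys

# Rebuild the initramfs *inside* the installed system, so the edited
# /etc/mkinitcpio.conf actually takes effect:
run chroot /mnt mkinitcpio -p linux
```

The key point, per the message above, is that the last command must run inside the chroot; running mkinitcpio on the live CD itself rebuilds the wrong image.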
Chris Sakalis said the following at 06/21/2012 01:57 PM :
On Thu, Jun 21, 2012 at 10:39 PM, D. R. Evans <doc.evans@gmail.com> wrote:
Sorry for the stupid question, but do you rebuild the init ram image after changing your mkinitcpio.conf ?
I don't understand the question, so it's not at all stupid. I'm just following the instructions on the wiki. The wiki says to edit mkinitcpio.conf immediately prior to the reboot, so that's what I'm doing. It doesn't mention any need to rebuild an image (nor how to do it).
Doc
-- Web: http://www.sff.net/people/N7DR
On the wiki, it does not mention anything about rebuilding your initramfs, because mkinitcpio is called automatically by the installer after the configuration phase. However:
mkinitcpio.conf is read by the mkinitcpio utility in order to create the initial ramdisk image. Essentially, it's a bunch of stuff needed by the kernel before the root filesystem is made available. After you edit the mkinitcpio.conf file, you HAVE to rebuild[1] your initramfs image, in order for the changes to actually take effect. If you just edit the file and then reboot, nothing is done.
My turn to ask a stupid question, because I'm sure your explanation makes perfect sense to someone who understands this stuff, but I don't: my previous experience with installing Linux on RAID was with Ubuntu, for which I didn't need to mess with any of this.

Are you saying that between these instructions:
---
Add the dm_mod module to the MODULES list in /etc/mkinitcpio.conf. Add the mdadm_udev and lvm2 hooks to the HOOKS list in /etc/mkinitcpio.conf after udev.
---
and the next step:
---
Once it is complete you can safely reboot your machine:
# reboot
---
there should be some additional step(s)?
[1] - https://wiki.archlinux.org/index.php/Mkinitcpio#Image_creation_and_activatio...
That seems to say that I should run mkinitcpio, but that step isn't mentioned, even adumbratively, anywhere on https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#RAID_installation

Hopelessly, completely, lost and beginning to despair....

Doc

-- Web: http://www.sff.net/people/N7DR
On Thu, Jun 21, 2012 at 11:19 PM, D. R. Evans <doc.evans@gmail.com> wrote:
My turn to ask a stupid question, because I'm sure your explanation makes perfect sense to someone who understands this stuff, but I don't: my previous experience with installing Linux on RAID was with Ubuntu, for which I didn't need to mess with any of this.
Are you saying that between these instructions:
---
Add the dm_mod module to the MODULES list in /etc/mkinitcpio.conf. Add the mdadm_udev and lvm2 hooks to the HOOKS list in /etc/mkinitcpio.conf after udev.
---
and the next step:
---
Once it is complete you can safely reboot your machine:
# reboot
---
there should be some additional step(s)?
[1] - https://wiki.archlinux.org/index.php/Mkinitcpio#Image_creation_and_activatio...
That seems to say that I should run mkinitcpio, but that step isn't mentioned, even adumbratively, anywhere on https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#RAID_installation
Hopelessly, completely, lost and beginning to despair....
Doc
-- Web: http://www.sff.net/people/N7DR
I myself do not use RAID, so I may be totally wrong, but I'm pretty sure that you have to create a new initramfs image.

Note that the Arch installer automatically generates the image after the configuration is done (the penultimate step of the installer -- "Configure System"). I guess that's why it is not explicitly mentioned in the wiki. If you edit your mkinitcpio.conf *without* or *after* running this step, then you have to create a new image on your own. Otherwise, the installer takes care of it.

Again, I do not use RAID, but given your problems and the lack of results from the solutions mentioned above, I think this is likely your problem.

--Chris Sakalis
Chris Sakalis said the following at 06/21/2012 02:39 PM :
Note that, the arch installer automatically generates the image after the configuration is done (the penultimate step of the installer - "Configure System"). I guess that's why it is not explicitly mentioned in the wiki.
If you edit your mkinitcpio.conf *without* or *after* running this step, then you have to create a new image on your own. Otherwise, the installer takes care of it.
OK; I understand. Thank you.

I'll go back and make sure that this is happening properly when I have the stomach for it (not today!).

Doc

-- Web: http://www.sff.net/people/N7DR
On 06/21/2012 03:57 PM, D. R. Evans wrote:
Chris Sakalis said the following at 06/21/2012 02:39 PM :
Note that, the arch installer automatically generates the image after the configuration is done (the penultimate step of the installer - "Configure System"). I guess that's why it is not explicitly mentioned in the wiki.
If you edit your mkinitcpio.conf *without* or *after* running this step, then you have to create a new image on your own. Otherwise, the installer takes care of it.
OK; I understand. Thank you.
I'll go back and make sure that this is happening properly when I have the stomach for it (not today!).
Doc
D. R.,

I just stumbled across the information in the mkinitcpio wiki page working a 'dm'raid boot failure. Take a look at:

https://wiki.archlinux.org/index.php/Mkinitcpio

Under: Using RAID

First, add the mdadm hook to the HOOKS array and any required RAID modules to the MODULES array in /etc/mkinitcpio.conf.

Kernel Parameters: Using the mdadm hook, you no longer need to configure your RAID array in the GRUB parameters. The mdadm hook will either use your /etc/mdadm.conf file or automatically detect the array(s) during the init phase of boot. Assembly via udev is also possible using the mdadm_udev hook. Upstream prefers this method of assembly. /etc/mdadm.conf will still be read for purposes of naming the assembled devices if it exists.

HTH, sorry if you already had the info.

-- David C. Rankin, J.D.,P.E.
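[Editor's note: pulling together the wiki excerpt above and the earlier suggestions in this thread, a hypothetical /etc/mkinitcpio.conf extract for a RAID1 root might look like the following. This is a sketch combining the thread's advice, not a verbatim quote from the wiki; lvm2 is omitted on the assumption that, as Doc says, no LVM is in use.]

```shell
# /etc/mkinitcpio.conf extract (hypothetical, assembled from thread advice):
# raid1 module per Jérôme's suggestion; mdadm_udev hook after udev per the
# wiki excerpt David quotes. Hooks after "filesystems" follow Doc's setup.
MODULES="raid1"
HOOKS="base udev autodetect pata scsi sata mdadm_udev filesystems usbinput fsck"
```

Whatever the exact contents, the thread's conclusion still applies: the initramfs must be rebuilt after any edit to this file for the change to take effect.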
D. R. Evans said the following at 06/20/2012 11:27 AM :
I have carefully followed the RAID instructions at: https://wiki.archlinux.org/index.php/Software_RAID_and_LVM#RAID_installation
I have not used LVM, just RAID. I have double-checked what I actually did, and believe that I did exactly what the page tells me to do.
I have two identical drives, which I configured as RAID1.
At the conclusion of the installation, the instructions say:
----
Once it is complete you can safely reboot your machine:
# reboot
----
When I try to reboot, I receive the error message: ERROR: device /dev/md0 not found ERROR: unable to find root device /dev/md0
What do I have to do to get past this error?
I have finally reached the point where the various /dev/md<n> devices mount during the reboot....

Now I get a large number of error messages of the form:
init: failed to create pty - disabling logging for job
and:
could not load /lib/modules/3.4.4-2-ARCH/modules.dep no such file or directory
on the console, and then the system appears to simply halt, without completing the reboot.

What are these messages trying to tell me, and how do I fix them so that the reboot can complete?

Doc

-- Web: http://www.sff.net/people/N7DR
On 07/02/2012 02:42 PM, D. R. Evans wrote:
I have finally reached the point where the various /dev/md<n> devices mount during the reboot....
Now I get a large number of error messages of the form:
init: failed to create pty - disabling logging for job
and:
could not load /lib/modules/3.4.4-2-ARCH/modules.dep no such file or directory
on the console, and then the system appears to simply halt, without completing the reboot.
What are these messages trying to tell me, and how do I fix them so that the reboot can complete?
Doc,

It will take one of the smarter devs to tell you exactly what the error is telling you, but it looks to me like your initramfs got borked/corrupt/whatever, and for some reason it can't determine what modules to load because whatever hook or link to the modules.dep file is either not there or broken.

In my case, when this happened with 3.4.4-2 and dmraid, I had to boot with the 2010.05 install disk due to the latest not having dmraid hooks; then I had to manually assemble the dmraid (not your 'md'raid) arrays and manually remake the initramfs with 'mkinitcpio -p linux'. This fixed my initramfs corruption. (Mine wouldn't even boot to the modules.dep point.) After remaking the image -- all my problems went away.

I don't know why I experienced this issue for the first time with 3.4.4-2, but I suspect it has to do with 'filesystem' and not the kernel. I have seen some weird issues both with dmraid and with archroot in the past couple of weeks.

Like I said, a smarter guy will have to help with the exact message, but hopefully a remake of the initramfs will straighten your modules.dep problem out.

-- David C. Rankin, J.D.,P.E.
On Monday 02 Jul 2012 13:42:39 D. R. Evans wrote:
I have finally reached the point where the various /dev/md<n> devices mount during the reboot....
Now I get a large number of error messages of the form:
init: failed to create pty - disabling logging for job
and:
could not load /lib/modules/3.4.4-2-ARCH/modules.dep no such file or directory
on the console, and then the system appears to simply halt, without completing the reboot.
What are these messages trying to tell me, and how do I fix them so that the reboot can complete?
When you say "reboot", you're saying that these messages appear as ArchLinux is trying to boot (after grub), right? Not as it's shutting down?

Could you list the messages that appear before the error messages, so that we can get an idea of what stage you've got to in the boot process?

Like David, I also think this smells like an initramfs issue, and chances are that if you can chroot into your system somehow and run "mkinitcpio -p linux", it might help.

Paul
Paul Gideon Dann said the following at 07/03/2012 02:35 AM :
On Monday 02 Jul 2012 13:42:39 D. R. Evans wrote:
I have finally reached the point where the various /dev/md<n> devices mount during the reboot....
Now I get a large number of error messages of the form:
init: failed to create pty - disabling logging for job
and:
could not load /lib/modules/3.4.4-2-ARCH/modules.dep no such file or directory
on the console, and then the system appears to simply halt, without completing the reboot.
What are these messages trying to tell me, and how do I fix them so that the reboot can complete?
When you say "reboot", you're saying that these messages appear as ArchLinux is trying to boot (after grub), right? Not as it's shutting down?
Yes... although it's not GRUB, since the wiki says that GRUB is not supported with RAID. So it's SYSLINUX.
Could you list the messages that appear before the error messages, so that we can get an idea of what stage you've got to in the boot process?
That's non-trivial... there's no way to capture them, so I would have to hope that XON/XOFF works (which it probably does) and write them down by hand. I'll do that when I can summon up the enthusiasm for it. Frankly, after more than a week trying to install Arch on RAID, I'm awfully close to giving up and going back to Ubuntu, which Just Worked, at least insofar as the installation was concerned.

In the meantime, as far as I recall, the messages said that the various RAID arrays were up and running, and then immediately I started getting a series of the "failed to create pty" messages. But it's quite likely that I've forgotten one or two informative messages that appeared before the errors started to occur.

Doc

-- Web: http://www.sff.net/people/N7DR
2012/7/3 D. R. Evans <doc.evans@gmail.com>:
Paul Gideon Dann said the following at 07/03/2012 02:35 AM :
On Monday 02 Jul 2012 13:42:39 D. R. Evans wrote:
I have finally reached the point where the various /dev/md<n> devices mount during the reboot....
Now I get a large number of error messages of the form:
init: failed to create pty - disabling logging for job
and:
could not load /lib/modules/3.4.4-2-ARCH/modules.dep no such file or directory
on the console, and then the system appears to simply halt, without completing the reboot.
[...]
Could you list the messages that appear before the error messages, so that we can get an idea of what stage you've got to in the boot process?
That's non-trivial... there's no way to capture them, so I would have to hope that XON/XOFF works (which it probably does) and write them down by hand. I'll do that when I can summon up the enthusiasm for it. Frankly, after more than a week trying to install Arch on RAID, I'm awfully close to giving up and going back to Ubuntu, which Just Worked, at least insofar as the installation was concerned.
In the meantime, as far as I recall, the messages said that the various RAID arrays were up and running, and then immediately I started getting a series of the "failed to create pty" messages. But it's quite likely that I've forgotten one or two informative messages that appeared before the errors started to occur.
Hello Doc,

the messages sound a bit like a read-only root filesystem. I understand you've been going through a lot of work, but it seems like there has been so much done on this system that it's very hard to troubleshoot now. One thing that isn't entirely clear to me (anymore): was this on a fresh install, or were you converting an existing Archlinux install to a RAID configuration?

In the case of a fresh install, would it be an option to start over from scratch? I ask this because then it would be a lot easier to list all the necessary steps one by one and hopefully get it running as wished in the first place. I'm willing to dig up some spare HDDs to try and get Arch installed on a RAID1 array, right from the start, but only if that is what you're looking for.

mvg, Guus
Guus Snijders said the following at 07/03/2012 01:57 PM :
In the case of fresh install; would it be an option to start over from scratch?
This was a fresh install.

My original problem (lack of /dev/md<n>) was due to an ambiguity in the English in the various wiki pages. Once I got past that, I proceeded right through the entire installation process, start-to-finish, with no obvious error. And then I performed the #reboot, which is the sort-of "do this to make sure it all worked" step at the very end. I didn't for a moment think there would be a problem at that stage. But there was.

I am wondering if the problem is something crazy like the fact that the RAID pair I'm using is on sdc and sdd, and something somewhere is assuming that I'm using sda and/or sdb (which actually are a RAID1 pair with Kubuntu on them). I'm thinking about simply pulling the plug on sda and sdb, and performing a completely fresh install again so that there's no chance whatsoever that at any point in the process it can go to sda or sdb by mistake.
I ask this, because then it would be a lot easier to list all the neccesary steps one by one and hopefully get it running as wished in the first place. I'm willing to dig up some spare HDD's to try and get Arch installed on a RAID1 array, right from the start, but only if that is what you're looking for.
That is indeed what I'm looking for.

Let me try one last time; I don't want to drag anyone else into the morass that I'm currently experiencing unless it's absolutely necessary. I just need to get my enthusiasm up to the point where I'm willing to spend another hour or two trying it again.

Doc

-- Web: http://www.sff.net/people/N7DR
2012/7/3 D. R. Evans <doc.evans@gmail.com>:
Guus Snijders said the following at 07/03/2012 01:57 PM :
In the case of fresh install; would it be an option to start over from scratch?
This was a fresh install. [...] I am wondering if the problem is something crazy like the fact that the RAID pair I'm using is on sdc and sdd, and something somewhere is assuming that I'm using sda and/or sdb (which actually are a RAID1 pair with Kubuntu on them).
I'm thinking about simply pulling the plug on sda and sdb, and performing a completely fresh install again so that there's no chance whatsoever that at any point in the process can it go to sda or sdb by mistake.
Hmm, i thought that /etc/mdadm.conf should take care of that.
I ask this, because then it would be a lot easier to list all the neccesary steps one by one and hopefully get it running as wished in the first place. I'm willing to dig up some spare HDD's to try and get Arch installed on a RAID1 array, right from the start, but only if that is what you're looking for.
That is indeed what I'm looking for.
Let me try one last time; I don't want to drag any one else into the morass that I'm currently experiencing unless it's absolutely necessary. I just need to get my enthusiasm up to the point where I'm willing to spend another hour or two trying it again.
OK. I guess you'd best contact me off-list if we are going side-by-side. If/when we find something wrong in the wiki, we could report back here.

mvg, Guus
participants (8)
- "Jérôme M. Berger"
- Chris Sakalis
- D. R. Evans
- Damjan
- David C. Rankin
- Guus Snijders
- Jason Ryan
- Paul Gideon Dann