[arch-general] dmraid-1.0.0rc15 in testing, need your help on this!
Hi guys,

I put a 1.0.0rc15 package into testing again. I can't test it myself, so I need your help on this.

Is it true that the naming scheme for partitions changed from name1 to namep1? And how are your arrays named? Are your arrays assembled correctly? Does everything work as before? Thanks for helping me with this; it will improve our dmraid support.

greetings
tpowa
-- 
Tobias Powalowski
Archlinux Developer & Package Maintainer (tpowa)
http://www.archlinux.org
tpowa@archlinux.org
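For anyone willing to test, a quick way to gather the feedback Tobias is asking for is the same commands used further down this thread (run as root; the output names are what matters):

ls -1 /dev/mapper   # how the arrays and their partitions are named (name1 vs. namep1)
dmraid -s -s        # whether the RAID sets assembled correctly and are active
dmraid -r           # the raw member disks and their metadata format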
On Fri, Jun 19, 2009 at 4:17 PM, Tobias Powalowski<t.powa@gmx.de> wrote:
Hi guys, I put a 1.0.0rc15 package into testing again. I can't test it myself, so I need your help on this.
Is it true that the naming scheme for partitions changed from name1 to namep1? And how are your arrays named? Are your arrays assembled correctly? Does everything work as before? Thanks for helping me with this; it will improve our dmraid support.
Ping. Does anyone use dmraid?
On Friday 19 June 2009 08:15:58 pm Aaron Griffin wrote:
On Fri, Jun 19, 2009 at 4:17 PM, Tobias Powalowski<t.powa@gmx.de> wrote:
Hi guys, I put a 1.0.0rc15 package into testing again. I can't test it myself, so I need your help on this.
Is it true that the naming scheme for partitions changed from name1 to namep1? And how are your arrays named? Are your arrays assembled correctly? Does everything work as before? Thanks for helping me with this; it will improve our dmraid support.
Ping. Does anyone use dmraid?
Yes, and I need this updated package. Hopefully it now supports the -R --rebuild option. I have an x86_64 box to test it on.
-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
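For reference, the rebuild option David mentions takes the set name and, presumably, the replacement disk; the exact argument order here is a guess, so check dmraid's usage output before relying on it:

dmraid -R nvidia_ecaejfdi /dev/sdd   # rebuild the named set onto the replacement disk (hypothetical names)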
On Friday 19 June 2009 04:17:45 pm Tobias Powalowski wrote:
Hi guys, I put a 1.0.0rc15 package into testing again. I can't test it myself, so I need your help on this.
Is it true that the naming scheme for partitions changed from name1 to namep1? And how are your arrays named? Are your arrays assembled correctly? Does everything work as before? Thanks for helping me with this; it will improve our dmraid support.
greetings tpowa
Tobias,

I have installed 1.0.0rc15 but I haven't rebooted yet. Are you saying I need to put the 'p' here:

# (0) Arch Linux
title Arch Linux
root (hd1,5)
kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdi<p>5 ro vga=0x31a
initrd /kernel26.img

# (1) Arch Linux
title Arch Linux Fallback
root (hd1,5)
kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdi<p>5 ro
initrd /kernel26-fallback.img

# (2) SuSE Linux 11.0
title openSuSE Linux 11.0
rootnoverify (hd0,4)
makeactive

# (3) memtest86+
title Memtest86+ [/memtest86+/memtest.bin]
kernel (hd1,5)/memtest86+/memtest.bin

AND in fstab:

/dev/mapper/nvidia_ecaejfdi<p>5 /     ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>6 /boot ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>7 /home ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>8 swap  swap defaults 0 0

-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
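Judging by the reports later in this thread, the edit David is asking about amounts to inserting a 'p' between the set name and the partition number, e.g. for his root device:

old (pre-rc15): kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdi5 ro vga=0x31a
new (rc15):     kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdip5 ro vga=0x31a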
On Sunday 21 June 2009, David C. Rankin wrote:
On Friday 19 June 2009 04:17:45 pm Tobias Powalowski wrote:
Hi guys, I put a 1.0.0rc15 package into testing again. I can't test it myself, so I need your help on this.
Is it true that the naming scheme for partitions changed from name1 to namep1? And how are your arrays named? Are your arrays assembled correctly? Does everything work as before? Thanks for helping me with this; it will improve our dmraid support.
greetings tpowa
Tobias,
I have installed 1.0.0rc15 but I haven't rebooted yet. Are you saying I need to put the 'p' here:
# (0) Arch Linux
title Arch Linux
root (hd1,5)
kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdi<p>5 ro vga=0x31a
initrd /kernel26.img

# (1) Arch Linux
title Arch Linux Fallback
root (hd1,5)
kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdi<p>5 ro
initrd /kernel26-fallback.img

# (2) SuSE Linux 11.0
title openSuSE Linux 11.0
rootnoverify (hd0,4)
makeactive

# (3) memtest86+
title Memtest86+ [/memtest86+/memtest.bin]
kernel (hd1,5)/memtest86+/memtest.bin
AND in fstab
/dev/mapper/nvidia_ecaejfdi<p>5 /     ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>6 /boot ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>7 /home ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>8 swap  swap defaults 0 0

Yes, this might happen. The last time I bumped to this version, people reported it. I'm not sure whether everyone needs the change, though; I need feedback on this.

greetings
tpowa
-- 
Tobias Powalowski
Archlinux Developer & Package Maintainer (tpowa)
http://www.archlinux.org
tpowa@archlinux.org
On Sunday 21 June 2009 02:07:53 am Tobias Powalowski wrote:
On Sunday 21 June 2009, David C. Rankin wrote:
On Friday 19 June 2009 04:17:45 pm Tobias Powalowski wrote:
Hi guys, I put a 1.0.0rc15 package into testing again. I can't test it myself, so I need your help on this.
Is it true that the naming scheme for partitions changed from name1 to namep1? And how are your arrays named? Are your arrays assembled correctly? Does everything work as before? Thanks for helping me with this; it will improve our dmraid support.
greetings tpowa
Tobias,
I have installed 1.0.0rc15 but I haven't rebooted yet. Are you saying I need to put the 'p' here:
# (0) Arch Linux
title Arch Linux
root (hd1,5)
kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdi<p>5 ro vga=0x31a
initrd /kernel26.img

# (1) Arch Linux
title Arch Linux Fallback
root (hd1,5)
kernel /vmlinuz26 root=/dev/mapper/nvidia_ecaejfdi<p>5 ro
initrd /kernel26-fallback.img

# (2) SuSE Linux 11.0
title openSuSE Linux 11.0
rootnoverify (hd0,4)
makeactive

# (3) memtest86+
title Memtest86+ [/memtest86+/memtest.bin]
kernel (hd1,5)/memtest86+/memtest.bin
AND in fstab
/dev/mapper/nvidia_ecaejfdi<p>5 /     ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>6 /boot ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>7 /home ext3 defaults 0 1
/dev/mapper/nvidia_ecaejfdi<p>8 swap  swap defaults 0 0
Yes, this might happen. The last time I bumped to this version, people reported it. I'm not sure whether everyone needs the change, though; I need feedback on this. greetings tpowa
Tobias:

Yes, it DID happen. The post-install warning was correct in my case. I modified both /boot/grub/menu.lst and /etc/fstab to include the new 'p' before the number in the device-mapper label, and Arch booted just fine. (Of course I drew the black bean of having fsck check partition three, which was an enjoyable break while it completed.) Everything seems to be humming away just as it was before, except for the new 'p'.

01:48 archangel:~> sudo dmraid -r
/dev/sdd: nvidia, "nvidia_ecaejfdi", mirror, ok, 1465149166 sectors, data@ 0
/dev/sdc: nvidia, "nvidia_fdaacfde", mirror, ok, 976773166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_ecaejfdi", mirror, ok, 1465149166 sectors, data@ 0
/dev/sda: nvidia, "nvidia_fdaacfde", mirror, ok, 976773166 sectors, data@ 0

[01:52 archangel:/home/david] # dmraid -s -s
*** Active Set
name   : nvidia_ecaejfdi
size   : 1465149056
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
*** Active Set
name   : nvidia_fdaacfde
size   : 976773120
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

One question: why is the 'p' necessary? It breaks all the scripts I had for mounting and unmounting the SuSE install's arrays while I am running Arch. SuSE is on a separate array on the same machine. (SuSE is on a dm array created out of sda & sdc, and Arch is on an array created on sdb & sdd.) Now when I boot into Arch and need to mount/unmount the arrays from the SuSE install, the scripts fail because they have no 'p'. Arch now boots like this:

01:18 archangel:~> ls -1 /dev/mapper
control
nvidia_ecaejfdi
nvidia_ecaejfdip5
nvidia_ecaejfdip6
nvidia_ecaejfdip7
nvidia_ecaejfdip8
nvidia_fdaacfde
nvidia_fdaacfdep5
nvidia_fdaacfdep6
nvidia_fdaacfdep7
nvidia_fdaacfdep8

At least when SuSE boots it still has my old dm labels, so the scripts on SuSE still work to mount and unmount the arrays the inactive Arch is on:

01:21 ecstasy:~> ls -1 /dev/mapper
control
nvidia_ecaejfdi
nvidia_ecaejfdi_part1
nvidia_ecaejfdi_part5
nvidia_ecaejfdi_part6
nvidia_ecaejfdi_part7
nvidia_ecaejfdi_part8
nvidia_fdaacfde
nvidia_fdaacfde_part1
nvidia_fdaacfde_part5
nvidia_fdaacfde_part6
nvidia_fdaacfde_part7
nvidia_fdaacfde_part8

So I guess I will have to edit all the scripts to put an extra 'p' in them as well. That's really kind of a pain, but if this new scheme is going to be the standard I can do it; if it is just a 'proposed' change, I would rather not. So is this official? If so, I don't mind changing them around, and I know SuSE will eventually catch up. I've cc'ed Heinz Mauelshagen (the dmraid developer at Red Hat) to see whether he thinks this is permanent. This does present a problem for dual-boot Linux boxes. What was the reason for the change, anyway?

Tobias, let me know if you want me to run any more tests. (I don't have anything to rebuild right now, or I would try the -R command, which is an EXCELLENT addition to dmraid. Kudos to the brainchild behind that.)

-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
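A sketch of that menu.lst/fstab edit as a sed one-liner, assuming labels shaped like the ones above (nvidia_ plus letters, then the partition number); it is untested here, so keep the backups:

cp /etc/fstab /etc/fstab.bak
cp /boot/grub/menu.lst /boot/grub/menu.lst.bak
# insert a 'p' between the set name and a trailing partition number
sed -i 's|\(/dev/mapper/nvidia_[a-z]\+\)\([0-9]\)|\1p\2|g' /etc/fstab /boot/grub/menu.lst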
So I guess I will have to edit all the scripts to put an extra 'p' in them as well. That's really kind of a pain, but if this new scheme is going to be the standard I can do it; if it is just a 'proposed' change, I would rather not.
So is this official? If so, I don't mind changing them around, and I know SuSE will eventually catch up. I've cc'ed Heinz Mauelshagen (the dmraid developer at Red Hat) to see whether he thinks this is permanent. This does present a problem for dual-boot Linux boxes. What was the reason for the change, anyway?
Tobias, let me know if you want me to run any more tests. (I don't have anything to rebuild right now, or I would try the -R command, which is an EXCELLENT addition to dmraid. Kudos to the brainchild behind that.)

Hi, since I didn't change the source code of dmraid, I think it is an official change; I guess it follows the kernel naming scheme here.
I'm trying to add some more dmraid-specific handling to the archboot setup routine. Reading through this wiki page: http://wiki.archlinux.org/index.php/Installing_with_Fake-RAID

- Is it really enough to call dmraid -ay after you have partitioned your raidset? Isn't this updated automatically?
- There are some mentions that you need to run dmsetup remove_all first, which I think is not correct.
- When installing grub, do I need this C H S hack? Does grub fail if I just install it to (hd0)?
- How about lilo? Does lilo work without any modification?

Sorry for asking so many questions, but without the hardware some things are really difficult to guess. Thanks for your help on this.

greetings
tpowa
-- 
Tobias Powalowski
Archlinux Developer & Package Maintainer (tpowa)
http://www.archlinux.org
tpowa@archlinux.org
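On the first question, David's test later in this thread suggests the answer is "yes, but you must re-run it": dmraid -ay does create nodes for newly created partitions, it just isn't triggered automatically. A sketch of the installer-side sequence (the set name is only an example):

dmraid -ay                           # activate the set; creates /dev/mapper/nvidia_fffadgic
cfdisk /dev/mapper/nvidia_fffadgic   # partition the activated set
dmraid -ay                           # re-run; creates the new ...p1, ...p2 nodes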
On Monday 22 June 2009 02:14:15 am Tobias Powalowski wrote:
So I guess I will have to edit all the scripts to put an extra 'p' in them as well. That's really kind of a pain, but if this new scheme is going to be the standard I can do it; if it is just a 'proposed' change, I would rather not.
So is this official? If so, I don't mind changing them around, and I know SuSE will eventually catch up. I've cc'ed Heinz Mauelshagen (the dmraid developer at Red Hat) to see whether he thinks this is permanent. This does present a problem for dual-boot Linux boxes. What was the reason for the change, anyway?
Tobias, let me know if you want me to run any more tests. (I don't have anything to rebuild right now, or I would try the -R command, which is an EXCELLENT addition to dmraid. Kudos to the brainchild behind that.)
Hi, since I didn't change the source code of dmraid, I think it is an official change; I guess it follows the kernel naming scheme here.
I'm trying to add some more dmraid-specific handling to the archboot setup routine. Reading through this wiki page: http://wiki.archlinux.org/index.php/Installing_with_Fake-RAID
- Is it really enough to call dmraid -ay after you have partitioned your raidset? Isn't this updated automatically?
I'm no dmraid expert, but I can tell you that any time you remove and recreate your dm partitions, you will get a NEW set of partition labels. If you look at the wiki, you will see my old Arch partitions on this same machine were:

nvidia_fffadgic
nvidia_fffadgic5
nvidia_fffadgic6
nvidia_fffadgic7
nvidia_fffadgic8

After a drive failure, and with no -R (rebuild) option yet in the community release, I had to delete my existing array, partition my new replacement disk to match the good disk, use 'dd' to copy all the partitions from the good drive to the new drive, and then add both disks to a new dmraid in the BIOS. When I did that, I ended up with:

nvidia_ecaejfdi
nvidia_ecaejfdi5
nvidia_ecaejfdi6
nvidia_ecaejfdi7
nvidia_ecaejfdi8

The naming was entirely automatic. I don't know whether it was handled in the BIOS or in Arch, but I do know I didn't do it. Now, with the new dmraid 1.0.0rc15 and the new 'p', I have:

nvidia_ecaejfdi
nvidia_ecaejfdip5
nvidia_ecaejfdip6
nvidia_ecaejfdip7
nvidia_ecaejfdip8
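A sketch of the manual rebuild David describes, with hypothetical device names (sdb = surviving disk, sdd = replacement); dd across raw partitions is unforgiving, so triple-check the names before running anything like this:

sfdisk -d /dev/sdb | sfdisk /dev/sdd   # clone the partition table to the replacement
for n in 5 6 7 8; do                   # copy each partition verbatim
    dd if=/dev/sdb$n of=/dev/sdd$n bs=1M
done
# then re-create the mirror from both disks in the RAID BIOS and reboot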
- There are some mentions that you need to run dmsetup remove_all first, which I think is not correct.
I agree with you; no need to even touch dmsetup. All I did was add the 'p's in /boot/grub/menu.lst and change fstab as the warning said, and it is all OK.
- When installing grub, do I need this C H S hack? Does grub fail if I just install it to (hd0)?
I don't think it would fail (maybe). My /boot/grub/device.map seems to handle that correlation:

[02:47 archangel:~] # cat /boot/grub/device.map
(hd0) /dev/mapper/nvidia_fdaacfde
(hd1) /dev/mapper/nvidia_ecaejfdi
(fd0) /dev/fd0

It just maps the labels to (hd0), etc., so nothing jumps out at me saying you can't.
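An untested sketch of installing GRUB legacy onto the mapped array using that device.map; the disk and partition numbers are taken from David's menu.lst and may well differ on other setups:

grub --device-map=/boot/grub/device.map <<EOF
root (hd1,5)
setup (hd1)
quit
EOF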
- How about lilo? Does lilo work without any modification?
I don't think I have worked with lilo since the Mandrake 7.4 (air) release, on an old 386 machine, installed from floppy discs (maybe it was on a CD).
Sorry for asking so many questions, but without the hardware some things are really difficult to guess. Thanks for your help on this. greetings tpowa
Thank you for your help. I'm just really glad to have the --rebuild capability now. I really like dmraid; I have it on 5-6 setups, and I have never had a single loss of data in the 4-5 years I have been using it. Great work!
-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
I need to explain this a bit more: if I start with empty disks, e.g. two disks that I put together in the BIOS of the RAID controller, then after booting the install media, which runs dmraid -ay during the boot process, I get something like /dev/mapper/nvidia_fffadgic, because no partitions are there yet. Now I cfdisk this device; are the nodes updated then?

- I mean, do I get /dev/mapper/nvidia_fffadgicp1, p2, p3 automatically, or do I need to run dmraid -ay for every partition I create?
- Also, what happens if partitions are deleted?

Thanks for some enlightenment.

greetings
tpowa
-- 
Tobias Powalowski
Archlinux Developer & Package Maintainer (tpowa)
http://www.archlinux.org
tpowa@archlinux.org
Just forgot one question: is /dev/sda also shown in the /dev tree if dmraid is used? thanks

greetings
tpowa
-- 
Tobias Powalowski
Archlinux Developer & Package Maintainer (tpowa)
http://www.archlinux.org
tpowa@archlinux.org
On Mon, 2009-06-22 at 10:16 +0200, Tobias Powalowski wrote:
Just forgot one question: is /dev/sda also shown in the /dev tree if dmraid is used? thanks greetings tpowa
That looks logical to me. dmraid is just RAID using the device-mapper/LVM subsystem. When I use LVM on a machine, I still get device nodes for /dev/sda and the like. dmraid can't work without device nodes for these devices, as it has to be managed by userspace tools.
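A quick way to see that layering for yourself, using the set names from this thread as examples:

dmsetup table nvidia_ecaejfdi    # the array is an ordinary dm target (here a mirror) over the raw disks
dmsetup deps nvidia_ecaejfdip5   # which underlying devices a partition node depends on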
On Monday 22 June 2009 03:16:15 am Tobias Powalowski wrote:
Just forgot one question: is /dev/sda also shown in the /dev tree if dmraid is used? thanks greetings tpowa
They are all there:

[09:53 archangel:~] # l /dev/sd*
brw-rw---- 1 root disk 8,  0 2009-06-22 01:40 /dev/sda
brw-rw---- 1 root disk 8,  1 2009-06-22 01:40 /dev/sda1
brw-rw---- 1 root disk 8,  5 2009-06-22 01:40 /dev/sda5
brw-rw---- 1 root disk 8,  6 2009-06-22 01:40 /dev/sda6
brw-rw---- 1 root disk 8,  7 2009-06-22 01:40 /dev/sda7
brw-rw---- 1 root disk 8,  8 2009-06-22 01:40 /dev/sda8
brw-rw---- 1 root disk 8, 16 2009-06-22 01:40 /dev/sdb
brw-rw---- 1 root disk 8, 17 2009-06-22 01:40 /dev/sdb1
brw-rw---- 1 root disk 8, 21 2009-06-22 01:40 /dev/sdb5
brw-rw---- 1 root disk 8, 22 2009-06-22 01:40 /dev/sdb6
brw-rw---- 1 root disk 8, 23 2009-06-22 01:40 /dev/sdb7
brw-rw---- 1 root disk 8, 24 2009-06-22 01:40 /dev/sdb8
brw-rw---- 1 root disk 8, 32 2009-06-22 01:40 /dev/sdc
brw-rw---- 1 root disk 8, 33 2009-06-22 01:40 /dev/sdc1
brw-rw---- 1 root disk 8, 37 2009-06-22 01:40 /dev/sdc5
brw-rw---- 1 root disk 8, 38 2009-06-22 01:40 /dev/sdc6
brw-rw---- 1 root disk 8, 39 2009-06-22 01:40 /dev/sdc7
brw-rw---- 1 root disk 8, 40 2009-06-22 01:40 /dev/sdc8
brw-rw---- 1 root disk 8, 48 2009-06-22 01:40 /dev/sdd
brw-rw---- 1 root disk 8, 49 2009-06-22 01:40 /dev/sdd1
brw-rw---- 1 root disk 8, 53 2009-06-22 01:40 /dev/sdd5
brw-rw---- 1 root disk 8, 54 2009-06-22 01:40 /dev/sdd6
brw-rw---- 1 root disk 8, 55 2009-06-22 01:40 /dev/sdd7
brw-rw---- 1 root disk 8, 56 2009-06-22 01:40 /dev/sdd8

-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
On Monday 22 June 2009 03:13:41 am Tobias Powalowski wrote:
I need to explain this a bit more:
If you start with empty disks, e.g. two disks that I put together in the BIOS of the RAID controller: after booting the install media, which runs dmraid -ay during the boot process, I get something like /dev/mapper/nvidia_fffadgic, because no partitions are there yet.
Tobias, that's correct. I put them together in the BIOS, and then I just partition them as normal when the Arch installer asks me what partitions I want.
Now I cfdisk this device; are the nodes updated then? I mean, do I get /dev/mapper/nvidia_fffadgicp1, p2, p3 automatically, or do I need to run dmraid -ay for every partition I create?
That I don't know. I know that all the nodes were updated during the first boot after upgrading to 1.0.0rc15, but I haven't looked since. Next time I'm at the machine, I'll boot and see. I think you will need to run dmraid -ay each time, but I will confirm.
- Also what happens if partitions are deleted?
I haven't tried that yet. Remarkably, cfdisk doesn't show the 'p' anywhere in the partition listing. So it looks like the 'p' is strictly a creature of dmraid:

cfdisk (util-linux-ng 2.14.2)

Disk Drive: /dev/mapper/nvidia_ecaejfdi
Size: 750156372992 bytes, 750.1 GB
Heads: 255   Sectors per Track: 63   Cylinders: 91201

Name              Flags    Part Type   FS Type               [Label]   Size (MB)
--------------------------------------------------------------------------------
nvidia_ecaejfdi5  Boot     Logical     Linux ext3                       20003.85 *
nvidia_ecaejfdi6           Logical     Linux ext3                         123.38
nvidia_ecaejfdi7           Logical     Linux ext3                       39999.54
nvidia_ecaejfdi8           Logical     Linux swap / Solaris              1998.75
                  Pri/Log  Free Space                                  688028.23
Thanks for some enlightment. greetings tpowa
-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
On Monday 22 June 2009 03:13:41 am Tobias Powalowski wrote:
I need to explain this a bit more:
<snip>
Now I cfdisk this device; are the nodes updated then? I mean, do I get /dev/mapper/nvidia_fffadgicp1, p2, p3 automatically, or do I need to run dmraid -ay for every partition I create? Also, what happens if partitions are deleted?
ADDING NEW PARTITION (PART9) - 10G IN SIZE:

cfdisk (util-linux-ng 2.14.2)

Disk Drive: /dev/mapper/nvidia_ecaejfdi
Size: 750156372992 bytes, 750.1 GB
Heads: 255   Sectors per Track: 63   Cylinders: 91201

Name              Flags    Part Type   FS Type               [Label]   Size (MB)
--------------------------------------------------------------------------------
nvidia_ecaejfdi5  Boot     Logical     Linux ext3                       20003.85 *
nvidia_ecaejfdi6           Logical     Linux ext3                         123.38
nvidia_ecaejfdi7           Logical     Linux ext3                       39999.54
nvidia_ecaejfdi8           Logical     Linux swap / Solaris              1998.75
nvidia_ecaejfdi9           Logical     Linux                            10001.95
                  Pri/Log  Free Space                                  678026.29

(write & quit) New node NOT created:

[13:39 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x  2 root root       0 2009-06-22 01:40 .
drwxr-xr-x 23 root root       0 2009-06-22 01:41 ..
crw-rw----  1 root root  10, 60 2009-06-22 01:40 control
brw-------  1 root disk 254,  0 2009-06-22 13:38 nvidia_ecaejfdi
brw-------  1 root disk 254,  2 2009-06-22 01:40 nvidia_ecaejfdip5
brw-------  1 root disk 254,  3 2009-06-22 01:40 nvidia_ecaejfdip6
brw-------  1 root disk 254,  4 2009-06-22 01:40 nvidia_ecaejfdip7
brw-------  1 root disk 254,  5 2009-06-22 01:40 nvidia_ecaejfdip8
<snip>

[13:44 archangel:~] # dmraid -ay nvidia_ecaejfdi
RAID set "nvidia_ecaejfdi" already active
RAID set "nvidia_ecaejfdip5" already active
RAID set "nvidia_ecaejfdip6" already active
RAID set "nvidia_ecaejfdip7" already active
RAID set "nvidia_ecaejfdip8" already active
RAID set "nvidia_ecaejfdip9" was activated

[13:44 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x  2 root root       0 2009-06-22 13:44 .
drwxr-xr-x 23 root root       0 2009-06-22 13:44 ..
crw-rw----  1 root root  10, 60 2009-06-22 01:40 control
brw-------  1 root disk 254,  0 2009-06-22 13:38 nvidia_ecaejfdi
brw-------  1 root disk 254,  2 2009-06-22 01:40 nvidia_ecaejfdip5
brw-------  1 root disk 254,  3 2009-06-22 01:40 nvidia_ecaejfdip6
brw-------  1 root disk 254,  4 2009-06-22 01:40 nvidia_ecaejfdip7
brw-------  1 root disk 254,  5 2009-06-22 01:40 nvidia_ecaejfdip8
brw-------  1 root disk 254, 10 2009-06-22 13:44 nvidia_ecaejfdip9
<snip>

DELETING PARTITION (PART9) - 10G IN SIZE:

cfdisk (util-linux-ng 2.14.2)

Disk Drive: /dev/mapper/nvidia_ecaejfdi
Size: 750156372992 bytes, 750.1 GB
Heads: 255   Sectors per Track: 63   Cylinders: 91201

Name              Flags    Part Type   FS Type               [Label]   Size (MB)
--------------------------------------------------------------------------------
nvidia_ecaejfdi5  Boot     Logical     Linux ext3                       20003.85 *
nvidia_ecaejfdi6           Logical     Linux ext3                         123.38
nvidia_ecaejfdi7           Logical     Linux ext3                       39999.54
nvidia_ecaejfdi8           Logical     Linux swap / Solaris              1998.75
                  Pri/Log  Free Space                                  688028.23

(write & quit) Partition NOT removed from /dev/mapper:

[13:48 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x  2 root root       0 2009-06-22 13:44 .
drwxr-xr-x 23 root root       0 2009-06-22 13:44 ..
crw-rw----  1 root root  10, 60 2009-06-22 01:40 control
brw-------  1 root disk 254,  0 2009-06-22 13:48 nvidia_ecaejfdi
brw-------  1 root disk 254,  2 2009-06-22 01:40 nvidia_ecaejfdip5
brw-------  1 root disk 254,  3 2009-06-22 01:40 nvidia_ecaejfdip6
brw-------  1 root disk 254,  4 2009-06-22 01:40 nvidia_ecaejfdip7
brw-------  1 root disk 254,  5 2009-06-22 01:40 nvidia_ecaejfdip8
brw-------  1 root disk 254, 10 2009-06-22 13:44 nvidia_ecaejfdip9
<snip>

[13:59 archangel:~] # dmraid -an nvidia_ecaejfdi
[13:59 archangel:~] # l /dev/mapper
total 0
drwxr-xr-x  2 root root       0 2009-06-22 13:44 .
drwxr-xr-x 23 root root       0 2009-06-22 13:44 ..
crw-rw----  1 root root  10, 60 2009-06-22 01:40 control
brw-------  1 root disk 254,  0 2009-06-22 13:48 nvidia_ecaejfdi
brw-------  1 root disk 254,  2 2009-06-22 01:40 nvidia_ecaejfdip5
brw-------  1 root disk 254,  3 2009-06-22 01:40 nvidia_ecaejfdip6
brw-------  1 root disk 254,  4 2009-06-22 01:40 nvidia_ecaejfdip7
brw-------  1 root disk 254,  5 2009-06-22 01:40 nvidia_ecaejfdip8
brw-------  1 root disk 254, 10 2009-06-22 13:44 nvidia_ecaejfdip9
<snip>

Huh??? Why wasn't nvidia_ecaejfdip9 deactivated? I deleted the partition in cfdisk and then tried activating (dmraid -ay) and deactivating (dmraid -an) the set, and still it is there. Is this a bug? Or do I need to erase the metadata in some other way?

-- 
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com
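A hedged guess at the stale-node question: the leftover mapping is an ordinary device-mapper node, so dmsetup should be able to drop just that one mapping even when dmraid -an will not (names taken from the listing above):

dmsetup info nvidia_ecaejfdip9     # confirm the node is unused (Open count: 0)
dmsetup remove nvidia_ecaejfdip9   # remove the stale partition mapping
ls -1 /dev/mapper                  # verify it is gone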
Works fine here; the p's are required for me.

Bye
urs

On Friday 19 June 2009 23:17:45, Tobias Powalowski wrote:
Hi guys, I put a 1.0.0rc15 package into testing again. I can't test it myself, so I need your help on this.
Is it true that the naming scheme for partitions changed from name1 to namep1? And how are your arrays named? Are your arrays assembled correctly? Does everything work as before? Thanks for helping me with this; it will improve our dmraid support.
greetings tpowa
participants (5)
- Aaron Griffin
- David C. Rankin
- Jan de Groot
- Tobias Powalowski
- Urs Wolfer