[arch-releng] New install iso, when?
I was wondering when there would be a new install ISO and if there is anything I can help out with to get it out the door. I would be willing to help with mostly anything except aif, since I am working on a custom install script called inky which aims to be much simpler than aif. More details about inky at https://bbs.archlinux.org/viewtopic.php?id=109335
On 11/30/2010 06:40 PM, Thomas Dziedzic wrote:
I think we should wait for a more stable Linux 2.6.36. Anyway, this can be a good period to test/review some pending patches in my git repo for archiso (install_dir branch). I also want to add the "ifcpu" patch from Thomas for archiso2dual, push the keyboard remap for syslinux, and add some options, like loading memtest86+ via memdisk for machines with low free RAM (below 640K). -- Gerardo Exequiel Pozzi \cos^2\alpha + \sin^2\alpha = 1
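For reference, loading memtest86+ through memdisk in syslinux is just an extra boot entry along these lines; the file names and paths here are assumptions, not the actual archiso layout:
  LABEL memtest
    MENU LABEL Memtest86+ (via memdisk)
    LINUX memdisk
    INITRD /boot/memtest86+.img
memdisk is loaded as the "kernel" and boots the memtest image that is passed to it as the initrd.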
On Tue, 30 Nov 2010 22:46:17 -0300 Gerardo Exequiel Pozzi <vmlinuz386@yahoo.com.ar> wrote:
Hey, I've given the build environment some love. I know Thomas has been looking around there a bit; the whole directory layout, scripts, documentation etc. should now be clearer. (see http://projects.archlinux.org/users/dieter/releng.git/) I'm in the process of building new testing images and fixing stuff as it breaks :) Other than archiso (which is Gerardo's/Thomas' territory) I would like to suggest the following things to tackle, if anyone has the time to contribute; as far as I'm concerned none of this is a necessity.
- wireless support. this needs a (dialog-based?) config utility which we could integrate, but this depends on the netcfg/initscripts changes.
- support for btrfs & nilfs: i just checked out the latest linux 2.6.37 code from git, in which these FS'es are still marked as experimental. does anyone have an idea when those will approximately be marked as stable? we should have new release images soon afterwards :)
- it would also be nice to be able to automatically start specific aif automatic installations when booting from PXE, similar to what you can do with FAI (see the sketch below). but that's more a personal wish, i doubt many users are waiting for that.
see https://wiki.archlinux.org/index.php/DeveloperWiki:releng_roadmap for more.
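For the PXE idea, a minimal sketch of what a pxelinux.cfg/default entry could look like; the aif_profile parameter is purely hypothetical (nothing like it exists in aif yet), and the kernel/initrd names and URL are only illustrative:
  DEFAULT autoinstall
  LABEL autoinstall
    LINUX vmlinuz26
    INITRD archiso.img
    APPEND aif_profile=http://10.0.0.1/profiles/webserver.cfg
aif would then have to fetch that profile over the network and start an automatic install with it, similar to what FAI does with its config space.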
If anyone feels like testing the current state of things, here you go: http://build.archlinux.org/isos/ -- built using the latest archiso git and current packages. I haven't tested these images myself. Dieter
On 12/04/2010 05:30 PM, Dieter Plaetinck wrote:
One warning: core-dual is > 700MB; we should use another profile for archiso2dual (-T split). -- Gerardo Exequiel Pozzi \cos^2\alpha + \sin^2\alpha = 1
On 12/05/2010 10:27 PM, Sven-Hendrik Haase wrote:
No, since lzma is still not officially supported in Linux. Maybe in 2.6.38... -- Gerardo Exequiel Pozzi \cos^2\alpha + \sin^2\alpha = 1
On 06.12.2010 03:37, Gerardo Exequiel Pozzi wrote:
Weren't we going to use a patched kernel to enable the (ugly-code) lzma support for archiso only?
Another archiso testbuild: http://build.archlinux.org/isos/2010.12.06/
Changes:
- aif, libui-sh from git. aif contains some filesystem refactorings and nilfs2 support
- inclusion of nilfs-utils
- archiso2dual split profile instead of basic, because the dual core image was 759MiB; now 686MiB
- remove joe from the package lists, joe is not in the repos anymore (not yet committed in archiso git)
Known issues:
- the interactive filesystem configuration interface is broken (autoprepare does work). sorry about this, will fix..
- nilfs-utils is not in core so it doesn't get installed to the target yet
Dieter
On Mon, 6 Dec 2010 22:34:34 +0100 Dieter Plaetinck <dieter@plaetinck.be> wrote:
new testbuilds @ http://build.archlinux.org/isos/2010.12.07/ I fixed the remaining bugs (or so I hope) in AIF which I introduced by doing lots of refactoring of the filesystem code, and added some nice things (e.g. the lvm2 hook is now added to mkinitcpio.conf for non-root filesystems as well). I don't plan to make new builds until nilfs-utils is in core, new archiso patches get merged, etc. Dieter
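For context, "the lvm2 hook is added" just means the target's mkinitcpio.conf ends up with lvm2 before filesystems in the hook list; roughly like this (the exact hook set is illustrative, not what aif writes verbatim):
  # /etc/mkinitcpio.conf (excerpt)
  HOOKS="base udev autodetect sata lvm2 filesystems"
Previously this only happened when / itself was on LVM; now any LVM-backed mountpoint triggers it.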
On Tue, Dec 7, 2010 at 9:00 AM, Dieter Plaetinck <dieter@plaetinck.be> wrote:
how about pulling btrfs-progs into core as well? and on that note, where does btrfs stand in terms of AIF? i have a massively awesome update to mkinitcpio-btrfs (actually a rewrite as `mkinitcpio-btrfsadm`, plus a new tool `btrfsadm`) in the pipeline, about 3-7 days from release, and i'd love to see some btrfs-progs support already on the new disc (or is it already there?). the new hook supports highly anticipated features like:
) multi-drive (RAID array) btrfs roots
) transparent, full-system, kernel-level snapshots and rollbacks
) multiple/parallel independent roots (writable system volumes)
) automatic history snapshots (every time you reboot -- autosnap)
) cron-based snapshots (via the `btrfsadm` tool in the crontab)
) user snapshots (via the `btrfsadm` tool manually)
) simple, reliable, automatic stagnant snapshot pruning
) automatic snapshot enumeration for extlinux (only for single device + detached boot)
) TODO: hot spare support
the new on-disk layout is vastly improved... rollbacks and snapshots are a matter of moving symlinks, and determining stagnant snapshots is a single command. the whole process is incredibly simple and error resistant. i bring it up here mainly because i would really like to see AIF (and archboot and friends) creating the necessary structure by default; it _heavily_ mirrors the method git uses to manage branches. the structure could be created and the active subvol marked as default... at that point the user would not even know, care, or be affected by the underlying structure. however, the hook introduces the concept of a `bootramfs` -- a kernel + initramfs based bootloader -- and i'm not sure how that can officially mesh with Arch. this means a 2-stage boot unless the user has extlinux and is using a "detached" boot scheme (not important, explained in a later email). i'll go into more detail in the coming days; just looking for a general feel from the release team regarding btrfs in general; at the very least, if it's not already, i'd like to see the btrfs-progs be included. C Anthony
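Not the btrfsadm implementation itself, but the underlying btrfs operations this kind of hook builds on look roughly like this (subvolume names and mountpoints are made up for the example):
  # keep the running system in a subvolume and snapshot it
  btrfs subvolume create /mnt/__active
  btrfs subvolume snapshot /mnt/__active /mnt/__snap/2010-12-07
  # "rollback" = point the default subvolume at an older snapshot
  btrfs subvolume list /mnt
  btrfs subvolume set-default <subvol-id> /mnt
the actual tool wraps this in the symlink-based layout described above.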
On Tue, 7 Dec 2010 09:35:29 -0600, C Anthony Risinger <anthony@extof.me> wrote:
how about pulling btrfs-progs into core as well? and on that note, where does btrfs stand in terms of AIF?
I'd like to see btrfs in [core] and AIF, too. While we're at it, what about supporting GPT (GUID Partition Table) in AIF? Since hard disks are getting bigger nowadays it will soon be necessary, as far as I know, because an MBR partition can only be up to 2 TB in size. Heiko
On Tue, 7 Dec 2010 17:19:23 +0100 Heiko Baums <lists@baums-on-web.de> wrote:
I'm not very familiar with this topic, nor am I familiar with the related topic of block boundary sizes and such. tpowa tried to explain it to me, though; this is what I wrote down:
* don't use sfdisk or cfdisk, they don't support 1MB boot sectors (which is a new standard for 4k-sector drives, better alignment and win7)
* parted and fdisk do.
* mbr is soon dead
* https://ata.wiki.kernel.org/index.php/ATA_4_KiB_sector_issues
* parted is also good for GPT and UEFI usage
* fdisk does not support GPT
* but parted has no (ncurses) UI
so yeah, let's figure out whether this info is still up to date, and how we should proceed. it looks like we should just use parted; if we can find/make an ncurses frontend for it, all problems should be magically solved. Dieter
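To illustrate the "just use parted" route: scripted GPT partitioning with 1MiB alignment is only a few commands (device, sizes and filesystem hints are examples):
  parted -s /dev/sda mklabel gpt
  parted -s -a optimal /dev/sda mkpart primary ext2 1MiB 101MiB    # /boot
  parted -s -a optimal /dev/sda mkpart primary ext4 101MiB 100%    # rest of the disk
  parted -s /dev/sda print
what's missing compared to cfdisk is only the interactive ncurses part.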
On 12/07/2010 10:54 AM, Dieter Plaetinck wrote:
A good first step, which from what I've seen would put Arch ahead of other distros, would be support for non-boot GPT partitions in the installer via parted. -JT
On Tue, 07 Dec 2010 11:06:45 -0700, "John T. Wilkinson" <john.t.wilkinson@dartmouth.edu> wrote:
Not quite right, as far as I know. It was primarily intended for EFI/UEFI, but it now works with BIOS, too. But first installing a system and then creating GPT partitions wouldn't make much sense, particularly if you need to install the system onto a partition bigger than 2 TB, even if that partition only contains /home.
It shouldn't only be non-boot GPT partitions; it should be possible to build the whole partitioning scheme as GPT. I don't know whether it would be possible to mix MBR and GPT partitions anyway. Heiko
On 12/07/2010 12:06 PM, Heiko Baums wrote:
This does not always work (see http://www.rodsbooks.com/gdisk/bios.html for example), hence my statement that booting from GPT with a BIOS is not "well supported" currently. That is what is used for data. -JT
On Tue, 7 Dec 2010 18:54:29 +0100, Dieter Plaetinck <dieter@plaetinck.be> wrote:
I'm not familiar with GPT either. I got to know it at FrOSCon 2010 at the FreeBSD booth. Nevertheless it looked quite interesting and is necessary for partitions bigger than 2 TB. I don't know its usability, but there's a CLI tool for GPT: gdisk, a.k.a. GPT fdisk: http://www.rodsbooks.com/gdisk/index.html http://sourceforge.net/projects/gptfdisk/ It's already in [extra] and on the live CDs Parted Magic (http://partedmagic.com) and SystemRescueCD (http://www.sysresccd.org). Parted has no ncurses GUI but can be used from the command line. I'll search for GPT tools again later. Heiko
On 12/07/2010 12:59 PM, Heiko Baums wrote:
I've been using gdisk (specifically sgdisk) for a while now to do Arch Linux installs on GPT-partitioned disks. I use Syslinux for booting, as it has good GPT support for legacy (non-UEFI) BIOSes. I submitted patches (which were accepted and are in sgdisk 0.6.13) to the gdisk project to add attribute-bit setting in sgdisk (needed to boot from a GPT partition). Since my installs are scripted I don't need an ncurses front end. My disk setup script also takes over the whole disk (I provide it with partition sizes in a text file), which is a limitation AIF doesn't have from what I've seen. My disk setup script also installs Syslinux on the /boot partition and makes it bootable. I developed my own setup/install scripts back in May of this year, primarily because AIF did not support GPT and Syslinux, but I may be able to contribute to AIF development if desired, so that I can use AIF instead of my custom setup scripts. (I've not looked into AIF that much; I do some automated, scripted post-install configuration and pull in other files, which I don't know whether AIF supports. I need to read up on it more.) All this to say I would be a lot more interested in AIF if it had full GPT and Syslinux support, and I'd be willing to contribute to AIF development if necessary to make that happen, if AIF would end up meeting my needs. -- Dwight Schauer
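A minimal sketch of the sgdisk + Syslinux approach Dwight describes, with example sizes, paths and type codes (not his actual script):
  sgdisk -og /dev/sda                               # wipe and create a fresh GPT
  sgdisk -n 1:0:+100M -t 1:8300 -c 1:boot /dev/sda  # /boot
  sgdisk -n 2:0:0     -t 2:8300 -c 2:root /dev/sda  # rest of the disk
  sgdisk -A 1:set:2 /dev/sda                        # legacy-BIOS-bootable attribute bit (needs sgdisk >= 0.6.13)
  mkfs.ext2 /dev/sda1
  mkdir -p /mnt/boot && mount /dev/sda1 /mnt/boot && mkdir -p /mnt/boot/syslinux
  extlinux --install /mnt/boot/syslinux             # plus a syslinux.cfg in that directory
  dd if=/usr/lib/syslinux/gptmbr.bin of=/dev/sda bs=440 count=1   # GPT-aware MBR boot code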
On Tue, 7 Dec 2010 09:35:29 -0600 C Anthony Risinger <anthony@extof.me> wrote:
where does btrfs stand in terms of AIF?
currently no btrfs support whatsoever. let me share some aif design insights with you.
my acronyms:
LV = logical volume
VG = volume group
PV = physical volume
DM = device mapper
BD = block device
FS = filesystem
DF = devicefile
"normal" FS'es ("do something on the BD represented by DF /dev/foo, so that you can then call `mount /dev/foo <somedir>`") are _trivial_ to add to aif. how aif works is this: it uses a "model" that represents what your DF/FS structure will look like. I personally usually have a layout like this: a boot partition, and a partition on which I do dm_crypt, which results in a DM BD, which I make a PV, then put a VG on it, which contains multiple LVs: one for swap, and two containing the FS'es which get mounted as / and /home. you can see that model at the bottom of this file: https://github.com/Dieterbe/aif/blob/master/examples/fancy-install-on-sda
you have probably noticed in the installer how you first configure all your filesystems in the dialog interface, but only after confirming does it perform all the required actions, step by step. since it also supports automatic installs where you define your FS'es in arbitrary order, aif figures out the dependencies and processes things in the right order (in my example: first create the dm_crypt, then the PV, then the VG, then the LVs, then the FS'es on those LVs, then mount all mountpoints in the right order: first /, then /home and /boot -- roughly the command sequence sketched below).
I chose this model-based approach initially because I wanted to get rid of the ugly, hacky original installer code, but still provide a lot of control through the nice dialog interfaces (and I wanted to allow automatic installs where you could just specify how you want your FS/BD structure to look, not a series of commands).
advantages:
- provides some abstraction; it's easy to add new (simple) filesystems. support for "buildup" and rollback comes for free (for simple filesystems)
- makes the dialog-based "configurator" a bit easier.
disadvantages:
- the more control you want to give users, the more you're just putting effort into wrapping command-line arguments in fancy dialog interfaces (although there is also a textbox where you can enter whichever additional arguments you want, so this is a compromise)
- pretty hard to implement fancier filesystems; you usually need to stick to the common use cases. (read on)
- bash data structures are very limited. it's not easy to model such a data structure (which, if you simplify things, is a tree, but in real life leaves can have multiple parents, like a VG that uses multiple PVs), so quite a bit of code is needed to update and parse text files to mimic the data structure, although I am considering using a specific optimized text format and an external tool to update/query the data. (https://bugs.archlinux.org/task/15640)
- users cannot do their own stuff outside aif and expect to see the results inside aif.
here are some examples of why this design can lead to complicated code:
- since all the modeling in the UI happens first, and the actual applying comes later, you need code that detects "okay, you just added a dm_crypt, we need to ask for a label; okay, now we use that label to write a 'fake' /dev/mapper/<label> to the file which will appear in the menu".
- for PVs, you need a way to differentiate in the menu between the real BD and the actual PV, so I write an entry as <PV DF>+ to the file; the '+' suffix represents the actual PV on which you can put VGs.
- for VGs, which can contain 0 to n LVs, I needed to hack/extend my model for a "BD" to support multiple "FS"'es at once. (line 34 in the textfile)
- a VG can actually span multiple PVs, but that would be really hard to implement (and is not often used) so I just ignored that.
- consider all these complications, then consider what needs to happen when something goes wrong and aif needs to do a rollback. the rollback does the inverse of what I described earlier (unmount all filesystems in the right order, then destroy the devices which need wiping/destroying, in the right order). cool feature, but it requires a bunch of hard-to-maintain code and I doubt it's used often.
because of all this, I have sworn quite a bit over the last few years, and thought that maybe we should KISS and let users do everything on the command line, or provide a minimal layer of abstraction, like some scripts which they can modify that set up a system in a certain way (for example, a series of mkfs, mount and pacman calls). I guess it's a tradeoff between making it easy for users and not overloading the brains of people who want to hack on aif.
this is why softraid hasn't been implemented yet, nor btrfs. also, I'm not very familiar with either one (although I am pretty interested in btrfs). I would need to know the most common/recommended use cases, and figure out the best way to implement them. or maybe just provide a few predefined wizards for specific setups (but there are so many possibilities this would be unfeasible, I think). or I need to take a different approach (see a bit above).
maybe we can do btrfs in a reasonably unpainful way -- what are the common ways to set it up? (or at least as a starting point, knowing you can further add snapshots and whatnot without worrying about that in aif). if we can do it in a similar way to what I did with lvm (don't allow a "thing" to span multiple things "below" it, but allow multiple things "on top of" it) it could be pretty easy actually, but btrfs seems so advanced that I don't like to provide only half-baked support for it. maybe we should get in touch on IRC or IM to discuss possible approaches. your initcpio support definitely looks cool (although that's not really an arch-releng topic)
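Stripped of the dialog layer, the ordered actions for that example layout boil down to something like this (device names and sizes are just for illustration):
  cryptsetup luksFormat /dev/sda2
  cryptsetup luksOpen /dev/sda2 cryptpool     # -> the DM BD /dev/mapper/cryptpool
  pvcreate /dev/mapper/cryptpool              # PV
  vgcreate cryptvg /dev/mapper/cryptpool      # VG
  lvcreate -L 2G  -n swap cryptvg             # LVs
  lvcreate -L 15G -n root cryptvg
  lvcreate -l 100%FREE -n home cryptvg
  mkswap /dev/cryptvg/swap
  mkfs.ext4 /dev/cryptvg/root
  mkfs.ext4 /dev/cryptvg/home
  mkfs.ext2 /dev/sda1                         # the boot partition
  mount /dev/cryptvg/root /mnt                # mount order: / first ...
  mkdir -p /mnt/boot /mnt/home
  mount /dev/sda1 /mnt/boot                   # ... then /boot and /home
  mount /dev/cryptvg/home /mnt/home
aif derives exactly this ordering from the model, and the rollback walks it in reverse.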
at the very least, if it's not already, i'd like to see the btrfs-progs be included.
That would be trivial, but it's not on my agenda yet. On any archiso medium you can just do `pacman -Sy btrfs-progs` (if you have networking). Dieter
i have some other responses to this message, but this is it for now... On Tue, Dec 7, 2010 at 11:23 AM, Dieter Plaetinck <dieter@plaetinck.be> wrote:
what if we created some kind of udev/blkid/etc. approach? i.e. we write some custom udev rules to manage a special directory, update files, touch files, run scripts, etc... as the system changes, udev would make sure AIF knows what the actual state is. i'm not super adept in the rules syntax, but i know enough to be dangerous. it seems like we could leverage it in some way to take care of all the dirty work... AIF just needs to monitor the <insert here>, and verify against an identical copy it creates during the install. this would also let AIF adapt to any outside changes made by the user with some grace. C Anthony
On Tue, 7 Dec 2010 14:34:02 -0600 C Anthony Risinger <anthony@extof.me> wrote:
interesting idea. but if aif really wants to know the current state, it can just query it (fdisk -l, ls /dev/mapper/*, <btrfs command>, etc.)
i'm not super adept in the rules syntax, but i know enough to be dangerous.
the syntax is a bit ugly right now, to keep aif code relatively simple.
let's not create additional problems. if you're suggesting to make aif "spot" changes made by the user, it still needs to be able to work with them, so there needs to be a model for it. we'd do better to implement the model and dialogs needed for btrfs support, without worrying about "detecting realtime changes" if we don't really need it. let me know if my previous explanation was clear, and if you want more info about something. Dieter
On Tue, Dec 7, 2010 at 3:15 PM, Dieter Plaetinck <dieter@plaetinck.be> wrote:
true, but that is synchronous, and dependent on polling and on specific tools which may or may not have feature parity. udev would provide near-complete abstraction over all block devices of interest, in a unified and predictable way. we could set traps in AIF, and use udev to send signals when devices change. AIF would become an event-driven framework.
yeah i know it's a little strange :-), but it makes enough sense, and is powerful enough to do all the cool things we rely on it for nowadays.
the whole real-time thing was more of a bonus [future] effect of tying into udev directly. i think we could implement the model by looking at your examples about LVM, because btrfs is very similar:
1) enumerate physical devices (create partitions/etc.)
2) enumerate device nodes (LVM: PVs -- btrfs: "devid") [devices/partitions]
3) enumerate block pools (LVM: VGs -- btrfs: the FS itself) [maybe md here even?]
4) enumerate logical volumes (LVM: LVs -- btrfs: subvolumes)
....
10) Profit!
simple FS's would have 1-1-1 mappings, whereas the more complex ones would be otherwise. this is why i said have udev build directories, and touch/edit files... we don't need a bash data structure, we could use the tmpfs itself, something like:
/devices
/devices/managed/....
/devices/managed/target0/....
/devices/unmanaged/sde3/....
(terrible example :-) udev could maintain file indexes or anything else we need. AIF would behave similarly to systemd, in having high-level triggers (link PV 1,2,3 -- VG 100% -- LV 40/60%) cascade into lower-level instructions that understand their own dependencies and fulfill the original request, decoupling the process a bit. AIF then waits for (and expects a particular) outcome back from udev. i.e., after being signaled by udev, AIF expects <N> nodes to exist under "managed", named <label>, with pointers to <UUID>. just some thoughts, with plenty of holes i'm sure. it's too hard to encode all the various implementation details into AIF; let the apps made for these things just do what they do best. this feels interesting to me so i'll try to hash out a prototype in the coming weeks :-) C Anthony
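To make that concrete: the udev side could be as small as one rules file calling a helper that maintains the tree above; both the rules file and the helper script are hypothetical here:
  # /etc/udev/rules.d/99-aif-state.rules (hypothetical)
  ACTION=="add|change", SUBSYSTEM=="block", RUN+="/usr/lib/aif/udev-state.sh update $name"
  ACTION=="remove",     SUBSYSTEM=="block", RUN+="/usr/lib/aif/udev-state.sh remove $name"
the helper would then create/remove the per-device directories and index files that AIF watches.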
On Tue, 7 Dec 2010 16:05:08 -0600 C Anthony Risinger <anthony@extof.me> wrote:
okay, i get you now. *if* we wanted to do this, it seems like an interesting approach, although we could also create a "rescan real system" function which you could trigger from a menu entry; with a small user inconvenience you could simplify this considerably. however, I still think none of this is needed.
okay, so instead of a text file containing FS structure definitions and FS properties, you would model the FS structure definition with a directory tree (and probably still keep FS properties in text files). this approach isn't better than what we have now (especially not when you consider automated installations, where you'd prefer specifying your FS structure definition as plaintext). also, this does not support multiple parents (like a VG over multiple PVs). i'm still more in favor of using an optimized plaintext format (like yaml or whatever -- see the sketch below) and using a command-line (yaml) program to query/manipulate the data.
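A rough idea of what such a plaintext model could look like for the dm_crypt + LVM example from before (purely illustrative yaml, not an existing aif format):
  /dev/sda1: {filesystem: ext2, mountpoint: /boot}
  /dev/sda2: {dm_crypt: {label: cryptpool}}
  /dev/mapper/cryptpool: {lvm-pv: true}
  cryptvg:
    lvm-vg: {physical-volumes: [/dev/mapper/cryptpool]}   # multiple parents fit naturally in a list
    logical-volumes:
      swap: {size: 2G, filesystem: swap}
      root: {size: 15G, filesystem: ext4, mountpoint: /}
      home: {size: 100%FREE, filesystem: ext4, mountpoint: /home}
a command-line yaml tool could then query and update this without any bash parsing code.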
the main reason the current model is needed is that aif provides a user interface that lets the user "build" his FS structure conceptually, before anything is really done (i.e. if the user says "i want to encrypt /dev/foo and give it label bar", aif knows it needs to create a /dev/mapper/bar in the UI so that the user can define something else (like lvm or a simple FS) on top of that, and so on). your approach can't help here, because it only works after certain FS's have actually been created. I don't see which problem your proposal solves. Dieter
On 12/05/2010 11:40 PM, Sven-Hendrik Haase wrote:
-- Gerardo Exequiel Pozzi \cos^2\alpha + \sin^2\alpha = 1
On Mon, 06 Dec 2010 20:29:25 -0300 Gerardo Exequiel Pozzi <vmlinuz386@yahoo.com.ar> wrote:
with -T split we went from 759MiB to 687MiB, so that's 72MiB. if lzma can gain 54M(i)B that's pretty nice as well, and it shrinks all images, not just dual. Btw Gerardo, in your archiso2dual readme, can you document the downsides of -T split? Afaik having the shared usershare.sqfs is not a problem, right? And I just noticed that with -T split, /lib/modules is still a separate sqfs for each architecture. what advantage does this give? Dieter