[arch-general] Source control on /etc
What is considered the Arch way to keep the configs in /etc under version control? I would like to be able to see at least a few changes back in my config history, at a minimum. I have looked at the etckeeper package, but it does not really seem to be fully set up to work with pacman, and both AUR packages are very outdated. Would I be best off just copying the files I change and pushing the changes to a separate directory under the control of, say, git? What methods do you employ? Thanks for any info or tips on this. :)
Hello Don, Excerpts from Don deJuan's message of Thu Feb 23 07:35:52 +0100 2012:
What is considered the Arch way to have version control over the configs in /etc? I would like to be able to see at least a few changes back in my config history at the minimum.
I too keep my /etc directory under version control. I use a detached worktree, which lets me keep the .git directory outside of /etc. The process is simple. You create a bare repo:

$ mkdir etc.git
$ cd etc.git && git init --bare

Now let's configure it to check the files out elsewhere:

$ git config core.worktree /etc

And export these variables to your current session:

$ export GIT_DIR=/path/to/etc.git
$ export GIT_WORK_TREE=/etc

Tip: here is a script[1] that makes this easy to work with. Just remember to run it with "." or "source", *not* with "sh", since sh opens another bash session whose environment is thrown away when the script is done. Now you can git add and git commit in your /etc while keeping it clean. :)
I have seen the package etckeeper and it does not seem to really fully be setup to work with pacman. Both AUR packages are very outdated.
etckeeper doesn't really fit pacman, because pacman doesn't merge files automatically; only apt does that (if you are silly enough to configure it to do so :p ). Also, etckeeper commits all the files in /etc, which makes for rather dumb commits that are not really resettable... I use it on a Debian server only as a last resort.
The Arch way is simpler: every time you merge a pacnew or add a feature to a config file, you commit it, keeping the same workflow as a normal code repo. Much simpler.
Would I just be best off just copying the ones I change and then push the changes to a separate dir that is under control of say git? What methods do you employ?
Well, this is kind of hard to do (believe me, I tried). Also, having the .git inside /etc and other dirs like $HOME is quite annoying, since I get the (branch) in red on my bash prompt[2].
I hope this can help you. [1] https://github.com/masterkorp/Home-files/blob/master/scripts/export_git.sh [2] https://github.com/masterkorp/Home-files/blob/master/.bashrc -- Regards, Alfredo Palhares
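For reference, a minimal end-to-end sketch of the detached-worktree setup described above, run as root with the repository at /root/etc.git (the path is only an example). The core.worktree step is left out here since, as noted further down the thread, it is redundant once GIT_WORK_TREE is exported:

$ mkdir /root/etc.git && cd /root/etc.git && git init --bare
$ export GIT_DIR=/root/etc.git GIT_WORK_TREE=/etc
$ cd /etc
$ git add . && git commit -m "initial import of /etc"

Later, after merging a pacman.conf.pacnew by hand, the same exports followed by:

$ git add pacman.conf && git commit -m "merge pacman.conf.pacnew"

record the change as an ordinary commit.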
On 02/22/2012 11:48 PM, Alfredo Palhares wrote:
[snip]
Awesome!! Thanks for that, I will have to read this thoroughly and give it a shot in the morning. I had read a few snippets about detached worktrees and was unsure whether that was ideal for what I want. I will post back if I have more questions, and again, thanks for the in-depth answer :D
On Thursday 23 February 2012 at 08:48 +0100, Alfredo Palhares wrote:
[snip]
hi all, "git config core.worktree /etc" is not really needed in your setup I would like to suggest to use alias instead of env var. This way you can work easily on multiple git repo in the same shell for example alias etc-git='git --git-dir=/path/to/etc.git --work-dir=/etc' alias home-git='git --git-dir=/path/to/home.git --work-dir=$HOME' #just check that $HOME is defined and run etc-git add /etc/pacman.conf etc-git rc.conf home-git add anyfileinhome home-git commit -a etc-git commit -a place the aliases in your .bashrc or aliases file
Excerpts from solsTiCe d'Hiver's message of Thu Feb 23 11:55:11 +0100 2012:
"git config core.worktree /etc" is not really needed in your setup True.
I would like to suggest to use alias instead of env var. This way you can work easily on multiple git repo in the same shell
for example alias etc-git='git --git-dir=/path/to/etc.git --work-tree=/etc' alias home-git='git --git-dir=/path/to/home.git --work-tree=$HOME' #just check that $HOME is defined
This is nice, I used this for a while, but I am so used to my git aliases[1] (e.g. gc="git commit") that I always got the commands wrong :P So I just use this little script[2].
Either way is very cool indeed. [1] https://github.com/masterkorp/Home-files/blob/master/.bash.d/aliases.bash [2] https://github.com/masterkorp/Home-files/blob/master/scripts/export_git.sh
On 02/23/2012 12:48 AM, Alfredo Palhares wrote:
[snip]
What about permissions and ownership? These are pretty important for /etc.
Excerpts from Matthew Monaco's message of Thu Feb 23 17:08:46 +0100 2012:
What about permissions and ownership? These are pretty important for /etc.
What about those? Git doesn't care about permissions; the only permission it stores is the executable bit.
-- Regards, Alfredo Palhares
Alfredo Palhares, Thu 2012-02-23 @ 17:24:01+0100:
Git doesn't care about permissions; the only permission it stores is the executable bit.
But that's problematic for files in /etc, many of which require specific ownership or mode bits set/unset. You don't want your VCS to elide the fact that /etc/shadow should only be readable by root, for instance. I don't think Git will change permissions on existing files in your working directory, but if you ever cloned your /etc repo onto another machine, the permissions would be screwed up.
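One common workaround, not mentioned in the thread, is to record ownership and modes in a file that is itself committed, so a clone can restore them. A rough sketch using GNU find (the metadata filename and the paths-without-spaces assumption are mine):

# save numeric owner, group and octal mode for everything under /etc
find /etc -printf '%U %G %#m %p\n' > /etc/.metadata

# on the other machine, after checking the files out, replay it
while read uid gid mode path; do
    chown "$uid:$gid" "$path"
    chmod "$mode" "$path"
done < /etc/.metadata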
On 02/23/2012 05:36 PM, Taylor Hedberg wrote:
[snip]
Hi, we're using http://joey.kitenet.net/code/etckeeper/ for that purpose. Greets, Dennis
On 02/23/2012 08:43 AM, Dennis Börm wrote:
we're using http://joey.kitenet.net/code/etckeeper/ for that purpose
Have you had any issues? The AUR packages for that are way out of date and probably abandoned by their owners. From what I read, it started giving people issues when pacman 3 came out, and now that we are on 4, with even more features, I would think a fair amount of work would have to be done to make it work properly on Arch.
On Thu, 23 Feb 2012 17:43:58 +0100 Dennis Börm <allspark@planetcyborg.de> wrote:
we're using
http://joey.kitenet.net/code/etckeeper/
for that purpose
I see it's available via the AUR, but, afaik, it does not support Arch's pacman due to the lack of hook support in pacman. Can you shed some light on how you use etckeeper on Arch? I'd probably use it with bzr (treeless repo + lightweight checkout). Sincerely, Gour -- As a blazing fire turns firewood to ashes, O Arjuna, so does the fire of knowledge burn to ashes all reactions to material activities. http://atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810
Dear all,
My /, /boot and /usr are on BTRFS (except /boot, which is on ext2) on an SSD. I want to add this line to my fstab to avoid too many writes to my SSD:

tmpfs /var/log tmpfs nodev,nosuid,noexec 0 0

So each time I reboot, my /var/log will be emptied, which could be a problem in case of a serious issue on my box. I was therefore thinking of a way to back up this folder before I shut down. I found this trick on the Arch forum: add this to my /etc/rc.local.shutdown:

echo "Copying LOGs..."
now=`date +"%Y%m%d_%Hh%M"`
mkdir -p ~/backup/logs_backup/$now
cp -Rp /var/log/* ~/backup/logs_backup/$now/

My ~ folder is on another HD. Will this script be enough to do the job? TY for advising.
[2012-06-20 10:36:17 +0200] Arno Gaboury:
[snip]
Why are you inflicting such a complicated setup on yourself if you cannot understand what those three little lines of shell do? That seems to me like a completely backward way of taking the learning curve... Also, please create new threads instead of hijacking random ones. Cheers. -- Gaetan
On 06/20/2012 11:03 AM, Gaetan Bisson wrote:
[snip]
OK for the new threads; please excuse my n00biness (and laziness) about this issue. I fully understand these 3 lines, and thought this was not a complicated way to back up /var/log at shutdown. But as it seems there is a much simpler way, I will investigate. Cheers.
[2012-06-20 11:08:37 +0200] Arno Gaboury:
I fully understand these 3 lines, and thought it was not a complicated way to back up /var/log at shutdown. But as it seems there is much simplier way, I will then investigate.
I did not say those three lines were complicated; quite the opposite. My point was that if you cannot understand them, you should not be doing anything involving partitioning, experimental file systems such as BTRFS, etc. But since you do understand them, you can determine for yourself whether they actually are an acceptable solution to your problem. Cheers. -- Gaetan
On 06/20/2012 11:23 AM, Gaetan Bisson wrote:
[snip]
It seems to me now that the asd daemon is a better and cleaner solution.
On 06/20/2012 11:03 AM, Gaetan Bisson wrote:
[snip]
After hours of reading, here is the correct script to add to my rc.shutdown.local. I know this method will NOT work in case of a system crash, but that is OK for me, as I understand what I am doing and it is simple.

echo -n "Copying /var/log ..."
cd /home/gabx/backup
tar -zcf "./`date +'%d-%b-%y.tgz'`" /var/log
echo " done."

I think it is much simpler than the one I mentioned earlier, as Gaetan pointed out. Cheers.
On Wed, Jun 20, 2012 at 7:34 AM, Arno Gaboury <arnaud.gaboury@gmail.com> wrote:
tar -zcf "./`date +'%d-%b-%y.tgz'`" /var/log
This date format will have you cursing yourself if you shut down more than once in the same day. --Andrew Hills
On 20/06/12 22:22, Andrew Hills wrote:
On Wed, Jun 20, 2012 at 7:34 AM, Arno Gaboury <arnaud.gaboury@gmail.com> wrote:
tar -zcf "./`date +'%d-%b-%y.tgz'`" /var/log
This date format will have you cursing yourself if you shut down more than once in the same day.
I think he will be cursing himself when he realises the logs tend to be most useful after a system crash and using this method he will not have any...
On 06/20/2012 02:36 PM, Allan McRae wrote:
[snip]
Will the following line keep me from cursing myself when I reboot several times in a day?

tar -zcf "./`date +'%D-%H-%M.tgz'`" /var/log

As for backing up the logs after a system crash, you are perfectly right. Now I am trying to find something I understand to replace this "dirty" script in my rc.local.shutdown. Until I find the clean way, I will stick to it.
I didn't follow this thread, but IIRC I read something about "not writing too often to the SSD"? So do you take care regarding noatime etc.?

FWIW: "Reliability and lifetime: SSDs have no moving parts to fail mechanically. Each block of a flash-based SSD can only be erased (and therefore written) a limited number of times before it fails. The controllers manage this limitation so that drives can last for many years under normal use. SSDs based on DRAM do not have a limited number of writes. Firmware bugs are currently a common cause for data loss. HDDs have moving parts, and are subject to potential mechanical failures from the resulting wear and tear." - Wikipedia

You'll make your Linux less reliable by handling log files in a dirty way, just to get a longer lifetime out of an SSD drive? Nobody really knows how long they'll last, but it's said they have a longer lifetime than mechanical drives. Perhaps not a good idea.
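For reference, noatime is just an extra mount option in fstab; something along these lines (device, filesystem and the remaining fields are placeholders) stops every file read from triggering an access-time metadata write:

/dev/sdXn  /  btrfs  defaults,noatime  0  0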
On Wed, Jun 20, 2012 at 9:51 AM, Arno Gaboury <arnaud.gaboury@gmail.com> wrote:
tar -zcf "./`date +'%D-%H-%M.tgz'`" /var/log
This will cause a different problem... maybe you wanted %F instead of %D. (Check "man date".) --Andrew Hills
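For example, a variant along those lines, with an ISO date plus the time of day, so that several shutdowns on the same day produce distinct archives (the target directory is only an example):

tar -zcf "/home/gabx/backup/`date +'%F-%H-%M'`.tgz" /var/log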
On Wed, Jun 20, 2012 at 10:36:17AM +0200, Arno Gaboury wrote:
Dear all, [snip] TY for advising.
Hello Arno
Sorry for the off-topic bit, but I have noticed that your threads tend to branch out from some other thread, many times. I believe that you use the reply button on an existing topic and then start your own thread. This looks very weird and is also confusing, as it makes me think your message is related to the one you forked from, while it is not. Please always start a new thread ;)
On Wed, Jun 20, 2012 at 14:38:27 +0530, gt wrote:
[snip]
For the mutt users, just press "#" to decouple the message from the parent thread. :-) </offtopic> Geert -- geert.hendrickx.be :: geert@hendrickx.be :: PGP: 0xC4BB9E9F This e-mail was composed using 100% recycled spam messages!
For the mutt users, just press "#" to decouple the message from the parent thread. :-)
Not in my .muttrc. What option is this? I might have unbound the key for some reason :)
</offtopic>
Manolo
On Wed, Jun 20, 2012 at 16:14:46 -0400, Manolo Martínez wrote:
For the mutt users, just press "#" to decouple the message from the parent thread. :-)
Not in my .muttrc. What option is this? I might have unbound the key for some reason :)
break-thread. The inverse is "&", link-thread (to fix messages from stupid MUAs that don't include an In-Reply-To header). Geert -- geert.hendrickx.be :: geert@hendrickx.be :: PGP: 0xC4BB9E9F This e-mail was composed using 100% recycled spam messages!
On 06/20/2012 10:21 PM, Geert Hendrickx wrote:
[snip]
Another day, something new: the mutt command-line MUA. Never boring when Linuxing. I will have a look at it then.
Manolo Martínez, Wed 2012-06-20 @ 16:14:46-0400:
[snip]
The mutt function is called break-thread.
On Wed, Jun 20, 2012 at 09:51:03PM +0200, Geert Hendrickx wrote:
Hello Arno [snip] Please always start a new thread ;)
For the mutt users, just press "#" to decouple the message from the parent thread. :-)
</offtopic>
Thanks for the advice. I knew about the feature, but never got around to using it.
There is a directory syncing daemon in the AUR that does exactly what you want. Dir-sync-daemon is the name, if I remember correctly.
On Wed, Jun 20, 2012 at 10:36:17AM +0200, Arno Gaboury wrote:
[snip]
Will this script be enough to do the job?
Depends on your job description. ;)

If your system crashes (hah, as if ever!) or becomes unresponsive, you're screwed, as rc.local.shutdown is likely not called and your logs are lost after reboot. This is probably not what you want.

If you've got suitable network infrastructure, you may want to instruct syslog-ng to forward the logs to a remote logging daemon in addition to the local ramdisk. This is nice and clean, given a stable network connection to a suitable machine to act as the logging host.

Just a desktop machine, or no independent server available? It may be enough for your purposes to use logrotate to copy the logs properly to a safe mass storage device at regular intervals, maybe hourly. This, however, won't help you much in case of a kernel panic either. It's still better than rolling your own rotation with cron and shell, though.

Feeling old-school? Set up a printer to receive critical(!) logs. For bonus points, use a 9-pin matrix printer. If you hear it screeching, you know something's horribly wrong. Free notification to boot!

Alternatively, if you need critical(!) log messages even after a crash, you may want to configure syslog-ng to log only(!) such critical messages directly to a file on your mass storage device. Output should be scarce, so as not to unduly stress your flash memory.

Lots of possibilities. Choose wisely.

Best regards, Dennis

-- "Den Rechtsstaat macht aus, dass Unschuldige wieder frei kommen." ("What makes a state under the rule of law is that the innocent go free again.") Dr. Wolfgang Schäuble, Bundesinnenminister (14.10.08, TAZ interview) 0D21BE6C - F3DC D064 BB88 5162 56BE 730F 5471 3881 0D21 BE6C
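As an illustration of the "regular intervals" option, a bare-bones sketch using cron and rsync instead of logrotate; the destination is only an example, and it assumes your cron setup runs scripts dropped into /etc/cron.hourly:

#!/bin/sh
# /etc/cron.hourly/log-mirror: copy the tmpfs-backed /var/log
# to persistent storage once an hour
rsync -a /var/log/ /home/gabx/backup/log-mirror/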
On Wed, Jun 20, 2012 at 11:36 AM, Dennis Herbrich <dennis@archlinux.org> wrote:
[snip]
If you use systemd with its accompanying log daemon (the journal), it does not write its logs to disk by default. It only does so if the directory /var/log/journal/ exists. Journald keeps volatile logs in /run/log/journal, which is on a tmpfs. If you create /var/log/journal/, you can use the MaxLevelStore= setting (/etc/systemd/journald.conf) to control which messages get written to disk, e.g. critical messages only. All other messages are still recorded in volatile memory. You can still run a classical syslog daemon and have the journal forward to it, implementing all the fancy network or printer stuff above. Of course, the problem with any syslog (or journal) approach is that it does not capture applications writing to /var/log/ directly.
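A short sketch of that setup; MaxLevelStore= goes in the [Journal] section of journald.conf, and "crit" is just an example threshold:

# make journald keep a persistent journal at all
mkdir -p /var/log/journal

# in /etc/systemd/journald.conf, under [Journal]:
#   MaxLevelStore=crit
# then restart the journal daemon to pick it up
systemctl restart systemd-journald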
On Wed, 20 Jun 2012 10:36:17 +0200 Arno Gaboury <arnaud.gaboury@gmail.com> wrote:
[snip]
Well, the SSD's limited number of write cycles is largely a myth these days (see www.toshiba.com/taec/news/media_resources/docs/SSDmyths.pdf). Of course it depends on your particular model/brand, but in practice an SSD will most likely outlive your machine anyway. So I wouldn't worry about /var/log too much.

However, optimizing log writes is a good idea even on an HDD. I think putting /var/log in RAM (as well as putting firefox/thunderbird profiles there) is stupid and is asking for trouble. A much better approach is to properly configure syslog-ng or rsyslog, specifically:

1. You don't have to write the same log into messages.log, kernel.log, etc.
2. If you are on a university wifi with RADIUS, you most likely obtain a lease every 15 min; this log (with level debug) goes into /var/log/dhcpcd.log.
3. Firewall logging is useful for network monitoring/debugging, but /var/log/iptables.log will grow huge on a public network.

In cases 2 and 3 you can tell syslog to put the corresponding files into /tmp/log (I assume /tmp is already tmpfs), since this info is not really needed in the long term.

-- Leonid Isaev GnuPG key: 0x164B5A6D Fingerprint: C0DF 20D0 C075 C3F1 E1BE 775A A7AE F6CB 164B 5A6D
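A rough sketch of the kind of syslog-ng rule point 2 describes, added by hand to /etc/syslog-ng/syslog-ng.conf; "src" is assumed to be the source name used by the stock Arch config, and /tmp/log must already exist:

# send dhcpcd chatter to tmpfs only, and stop it reaching the other logs
destination d_dhcpcd_tmp { file("/tmp/log/dhcpcd.log"); };
filter f_dhcpcd { program("dhcpcd"); };
log { source(src); filter(f_dhcpcd); destination(d_dhcpcd_tmp); flags(final); };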
On Wed, 2012-06-20 at 09:57 -0500, Leonid Isaev wrote
Well, SSD's limited number of write cycles is largerly a myth these days [snip]
A storage drive should be usable in pretty much every way; we aren't talking about a USB stick or a DVD-RW ;). If you need tricks to extend its lifetime, then it's a useless device. I already quoted the wiki regarding lifetime: it's said that they have a longer lifetime than modern hard disk drives usually do. If they don't last long just because Linux writes log files too often, and you have to use tricks and an additional hard disk drive, then these new devices are crap.

Again, what about noatime etc.? The way they're handled might matter in the same way it matters for HDDs, e.g. does the FS require something comparable to M$ FS defragmentation? But if a user needs to worry about read and write cycles for a storage device, that IMO makes using a computer too complicated. This is a task for the FS, the device's controller, or whatever.

How often do we need a log file after a regular shutdown? If you only copy them at shutdown, you might as well abandon those files completely.

Just an opinion, Ralf
On Wed, 20 Jun 2012 17:20:55 +0200 Ralf Mardorf <ralf.mardorf@alice-dsl.net> wrote:
On Wed, 2012-06-20 at 09:57 -0500, Leonid Isaev wrote
Well, SSD's limited number of write cycles is largerly a myth these days [snip]
[...] The way they're handled might be important in a way it's important for HDDs too, e.g. does the FS require something comparable to M$ FS defragmentation?
Well, it all depends on the task at hand. Windows 7 is quite efficient on SSDs for a general-purpose system. And NTFS, if properly configured, is at the same level of performance as ext4/btrfs, maybe better, at least in my experience.
But if a user needs to take care about read and write cycles for a storage device IMO make the usage of a computer too complicated. This is a task for the FS, the device's controller or whatever.
Or OS :)
How often does we need a log file after a regular shutdown? If you copy them for shutdown, you simply can abandon those files completely.
System logs are always useful and must not be volatile.
Just an opinion, Ralf
-- Leonid Isaev GnuPG key: 0x164B5A6D Fingerprint: C0DF 20D0 C075 C3F1 E1BE 775A A7AE F6CB 164B 5A6D
On 06/20/2012 05:20 PM, Ralf Mardorf wrote:
[snip]
OK guys. When I bought my SSD, I too read that this story of a short lifetime is a myth. As it is now clear to me that writing /var/log to RAM is a totally foolish idea in case of a crash, I am back to my original fstab, with no entry for /var/log. I will now take my time to understand rsyslog or syslog-ng. Thank you all for your wise advice.
OK guys. When I bought my SSD, I too read that this story of a short lifetime is a myth. As it is now clear to me that writing /var/log to RAM is a totally foolish idea in case of a crash, I am back to my original fstab, with no entry for /var/log. I will now take my time to understand rsyslog or syslog-ng. Thank you all for your wise advice.
Not exactly. It's true, and as your filesystem fills up it becomes more of a problem. However, modern drives, such as those with SandForce controllers, reserve around 20% of the drive so that the problem is avoided for the lifetime of the drive. The picture of data preservation, reliability and shock tolerance between SSDs and HDDs also has many intricacies depending on your concerns; it is far from simply "SSD rules in all situations except capacity/£". ________________________________________________________ Why not do something good every day and install BOINC. ________________________________________________________
I wrote a system backup program called "mime" that works similarly to Apple's Time Machine on the back end. Basically, each time you back up your system, another copy of your file system becomes available. Another program, "lsmime", is installed with it and is used to list, restore and view information about files that are backed up. The new version I am about to release can even show a diff of a particular file against any version in your backups. The features available give the feel of having your entire file system under version control. The version on the site is functional and we have been using it on our servers and workstations for years. I will have a new version available in a few weeks; the current version can be downloaded from the link below. If you end up using it, I would greatly appreciate any feedback you can provide. In regards to your original question, I don't know what is considered the "Arch" way of doing this, however I run Arch at work and at home and it is backed up using mime on a daily or weekly basis (this has saved my butt more than once). http://code.google.com/p/mime-backup/ Thank you Squall
On 06/20/2012 07:46 AM, Squall Lionheart wrote:
[snip]
Squall, very nice work; I am going to give this a shot later today on a test box. Thanks for pointing this out. I have tried a few of the suggestions made in reply to my OP, but this seems to be the best so far. Thanks for bringing this back up.
Squall, very nice work; I am going to give this a shot later today on a test box. Thanks for pointing this out. I have tried a few of the suggestions made in reply to my OP, but this seems to be the best so far. Thanks for bringing this back up.
You're welcome. I will post a message to everyone when I roll out the next version, since it's a huge improvement over the current one, with a lot of very powerful and user-friendly features as well as efficiency improvements. Enjoy -- Yesterday is history. Tomorrow is a mystery. Today is a gift. That's why its called the present. Headmaster Squall :: The Wired/Section-9 Close the world txen eht nepo $3R14L 3XP3R1M3NT$ #L41N http://twitter.com/headmastersqual
On 06/20/12 at 09:28am, Squall Lionheart wrote:
Your welcome. I will post a message to everyone when I roll out my next version since it's a huge improvement over the current one with a lot of very powerful and user friendly features, as well as efficiency improvements.
It'd be nice if you added it to the AUR. M
It'd be nice if you added it to the AUR.
M
After I roll out this update, that's on my list of stuff to figure out :). I have never created an AUR package, but it doesn't sound too difficult. Squall -- Yesterday is history. Tomorrow is a mystery. Today is a gift. That's why its called the present. Headmaster Squall :: The Wired/Section-9 Close the world txen eht nepo $3R14L 3XP3R1M3NT$ #L41N http://twitter.com/headmastersqual
Excerpts from Taylor Hedberg's message of Thu Feb 23 17:36:00 +0100 2012:
But that's problematic for files in /etc, many of which require specific ownership or mode bits set/unset. You don't want your VCS to elide the fact that /etc/shadow should only be readable by root, for instance.
I don't think Git will change permissions on existing files in your working directory, but if you ever cloned your /etc repo onto another machine, the permissions would be screwed up.
Yeah, I agree, you should be careful with that. I never transplant the /etc of a machine; I only use it as a detailed backup. Whenever I have another machine, I create a new /etc repository for it, copying manually the files I need, because I don't even want the history back. You could do it with git as root, and those files would be owned by root; you would just need to adjust the ownership, like you do when you copy a file manually. In my home repo I do that and keep a branch for each computer, but not for /etc repos, simply because they often have very different distros and objectives.
On 02/23/2012 08:56 AM, Alfredo Palhares wrote:
[snip]
Hey, thanks again everyone. I am going to start trying out both methods, see which I feel works best for me, and post which way I go.
participants (21)
- Alfredo Palhares
- Allan McRae
- Andrew Hills
- Arno Gaboury
- Dennis Börm
- Dennis Herbrich
- Don deJuan
- Gaetan Bisson
- Geert Hendrickx
- Gour
- gt
- Jan Steffens
- Jesse Jaara
- Kevin Chadwick
- Leonid Isaev
- Manolo Martínez
- Matthew Monaco
- Ralf Mardorf
- solsTiCe d'Hiver
- Squall Lionheart
- Taylor Hedberg