[arch-general] What dirs are good to put in a tmpfs?
Hi, I have 7 GiB of ram but I'm only using 300 MiB, so I thought I would put stuff into it to speed up my system. I installed anything-sync-daemon but I have no idea what to sync. Is there a tool that can tell which dirs are being used the most? I'd rather not guess and benchmark, I don't have the patience for that. What are common dirs that would benefit from being in ram?
[2013-07-24 21:27:11 -0600] Chris Moline:
I installed anything-sync-daemon but I have no idea what to sync.
General question: isn't the effect of that software exactly the same thing as increasing the vm.dirty_expire_centisecs kernel parameter? Except maybe in the case of a given application calling sync() all the time, but then it usually has a good reason to do so (such as the written data being too important to be in cache during a power cut). -- Gaetan
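For anyone wanting to experiment with that knob, a read-only check plus a persistent tweak looks like this (the value 6000, i.e. 60 s, is just an illustration):

```shell
# Current writeback expiry, in centiseconds (kernel default: 3000 = 30 s)
cat /proc/sys/vm/dirty_expire_centisecs

# To raise it persistently (as root), drop a fragment into sysctl.d, e.g.
# /etc/sysctl.d/99-writeback.conf containing:
#   vm.dirty_expire_centisecs = 6000
# then apply it with: sysctl --system
```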
On Thu, Jul 25, 2013 at 4:35 AM, Gaetan Bisson <bisson@archlinux.org> wrote:
[2013-07-24 21:27:11 -0600] Chris Moline:
I installed anything-sync-daemon but I have no idea what to sync.
General question: isn't the effect of that software exactly the same thing as increasing the vm.dirty_expire_centisecs kernel parameter?
Except maybe in the case of a given application calling sync() all the time, but then it usually has a good reason to do so (such as the written data being too important to be in cache during a power cut).
-- Gaetan
Yeah, it shouldn't increase performance except where it's causing possible data loss by ignoring `fsync`/`fdatasync`. There's no need to "guess and benchmark" because the kernel is already managing this for you.
On 24 July 2013 23:27, Chris Moline <blackredtree@gmail.com> wrote:
Hi, I have 7 GiB of ram but I'm only using 300 MiB, so I thought I would put stuff into it to speed up my system. I installed anything-sync-daemon but I have no idea what to sync. Is there a tool that can tell which dirs are being used the most? I'd rather not guess and benchmark, I don't have the patience for that. What are common dirs that would benefit from being in ram?
Hi Chris,
I typically mount my /tmp to a tmpfs, since it is really only temporary; that way I can avoid writes to my hard drive. I've also seen people mount their mozilla profiles to tmpfs for speedups in firefox. Otherwise, I have a large 'scratch' space I use for my bioinformatics research; typically I have a tmpfs on our servers around 12 GiB. If you are doing a lot of temporary reading and writing, that might be worth looking into.
Calvin Morrison
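For the scratch-space case, the mount is a one-liner (the size and mount point below are examples, not Calvin's actual setup; tmpfs only consumes RAM as files are written):

```shell
# One-off 12 GiB scratch tmpfs (as root; size and mount point are examples)
mount -t tmpfs -o size=12G,noatime tmpfs /mnt/scratch
```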
On Wed, 24 Jul 2013 21:27:11 -0600 Chris Moline <blackredtree@gmail.com> wrote:
Hi, I have 7 GiB of ram but I'm only using 300 MiB, so I thought I would put stuff into it to speed up my system.
A great way to utilize RAM is to run several VMs :) Also, I wonder what your chipset/RAM type is, because on a typical desktop board with a modern CPU and RAM modules, your memory size should be even for optimal (dual-channel) performance. Or is your video memory shared?
I installed anything-sync-daemon but I have no idea what to sync. Is there a tool that can tell which dirs are being used the most? I'd rather not guess and benchmark, I don't have the patience for that. What are common dirs that would benefit from being in ram?
It all depends on your usage pattern. One procedure which really benefits from being done in RAM is building packages, especially large ones like gcc, glibc or qemu.

In some circumstances, you'd want to store the systemd journal and/or part of the syslog log files in RAM. For example, the HostAP (wireless authentication) daemon can log a lot. As a result, the journal grows by dozens of MiB a day, which quickly makes reading it off the disk rather painful. Since the journal cannot be fine-tuned, I usually configure it to be volatile, and also tell syslog to write hostapd-related messages to e.g. /tmp/log/hostapd.log.

For a regular desktop, people put in RAM ~/.mozilla, ~/.local/chromium (or whatever Chrome uses these days), etc. However, in my experience the resulting speedup is next to none and not worth the risk of data loss in case of a power failure or a system freeze...

The truth is that on a desktop machine which does not do virtualization, you don't need more than 2 GiB of RAM. Remember, memory modules do consume power, so it may be sensible to remove most of them.

Cheers,
-- Leonid Isaev
GnuPG key: 0x164B5A6D
Fingerprint: C0DF 20D0 C075 C3F1 E1BE 775A A7AE F6CB 164B 5A6D
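The journald side of that setup is a small config change (the RuntimeMaxUse cap and the rsyslog filter below are illustrative additions, not part of Leonid's mail):

```
# /etc/systemd/journald.conf -- keep the journal in RAM only (under /run/log/journal)
[Journal]
Storage=volatile
RuntimeMaxUse=30M
```

The syslog half depends on which daemon you run; with rsyslog, for example, a filter line such as `:programname, isequal, "hostapd"  /tmp/log/hostapd.log` would divert hostapd messages to a tmpfs-backed file.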
On Thu, Jul 25, 2013 at 11:34:00AM -0400, Leonid Isaev wrote:
On Wed, 24 Jul 2013 21:27:11 -0600 Chris Moline <blackredtree@gmail.com> wrote:
<snip>
I installed anything-sync-daemon but I have no idea what to sync. Is there a tool that can tell which dirs are being used the most? I'd rather not guess and benchmark, I don't have the patience for that. What are common dirs that would benefit from being in ram?
It all depends on your usage pattern.
One procedure which really benefits from being done in RAM is building packages, especially large ones like gcc, glibc or qemu.
I generally do this by resizing my /tmp to be ~90% of my RAM (add size=6G to the options in fstab). It won't intrude on anything until you actually start using it, but be careful not to fill it, or the system will start swapping like crazy.
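Concretely, the fstab line would look something like this (6G is Sean's example figure; since tmpfs allocates pages lazily, the cap only limits growth):

```
# /etc/fstab -- cap the /tmp tmpfs at 6 GiB (example size)
tmpfs   /tmp   tmpfs   defaults,noatime,size=6G   0  0
```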
For a regular desktop, people put in RAM ~/.mozilla, ~/.local/chromium (or whatever Chrome uses these days), etc. However, in my experience the resulting speedup is next to none and not worth the risk of data loss in case of a power failure or a system freeze...
I use profile-sync-daemon, but mostly to prevent writes to my SSD. I don't think you'll get much performance gain from it, especially since the kernel does pretty aggressive read/write caching with any unused RAM.
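That caching is easy to observe; on a Linux box, the `Cached` field of /proc/meminfo is the page cache holding recently read file data:

```shell
# Show how much RAM the kernel is already using as a file cache
grep -E '^(MemTotal|MemFree|Cached):' /proc/meminfo
```

A cold file read lands in that cache, so a second read is served from RAM with no tmpfs tricks needed.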
The truth is that on a desktop machine which does not do virtualization, you don't need more than 2 GiB of RAM. Remember, memory modules do consume power, so it may be sensible to remove most of them.
I'm not entirely sure this is what you'd want to do. The system will use the memory in one way or another. Unless you're _really_ concerned about power usage, it seems silly to take out RAM. --Sean
Daniel Micay <danielmicay@gmail.com> wrote:
There's no need to "guess and benchmark" because the kernel is already managing this for you.
A great way to utilize RAM is to run several VMs :) Also, I wonder what your chipset/RAM type is, because on a typical desktop board with a modern CPU and RAM modules, your memory size should be even for optimal (dual-channel) performance. Or is your video memory shared?
AMD A6-5400K APU, DIMM DDR3 Synchronous, 2x8Gs. I didn't see an option in my BIOS for shared memory, so I'm assuming it's not, or that I'm blind.
It all depends on your usage pattern.
I've installed a system monitor and found that the only time my disk I/O is high is when I'm running deluged. Would my torrents dir be a good candidate for tmpfs? It's rather larger than my RAM, but is it possible to have only the most intensive torrents put into it? Or is this unnecessary?
One procedure which really benefits from being done in RAM is building packages, especially large ones like gcc, glibc or qemu.
In some circumstances, you'd want to store systemd journal and/or part of syslog log files in RAM. For example, HostAP (wireless authentication) daemon can log a lot. As a result, the journal grows dozens of MiB a day which quickly makes reading it off the disk rather painful. Since the journal cannot be fine tuned, I usually configure it to be volatile, and also tell syslog to write hostapd-related messages to e.g. /tmp/log/hostapd.log.
I've put ~/src/<specific-dirs> in, but logging doesn't apply to me.
For a regular desktop, people put in RAM ~/.mozilla, ~/.local/chromium (or whatever Chrome uses these days), etc. However, in my experience the resulting speedup is next to none and not worth the risk of data loss in case of a power failure or a system freeze...
I've installed profile-sync-daemon. I haven't noticed any improvement yet but I will keep using it for a month or so
The truth is that on a desktop machine which does not do virtualization, you don't need more than 2 GiB of RAM. Remember, memory modules do consume power, so it may be sensible to remove most of them.
Interesting, I've always thought the more RAM the better :P Thanks for your input, guys.
On Thu, Jul 25, 2013 at 12:20 PM, Chris Moline <blackredtree@gmail.com> wrote:
I've installed a system monitor and found that the only time my disk I/O is high is when I'm running deluged. Would my torrents dir be a good candidate for tmpfs? It's rather larger than my RAM, but is it possible to have only the most intensive torrents put into it? Or is this unnecessary?
I misread the graph. I saw a big jump and thought it was pushing I/O way up there, but it's actually only 100K. The only performance problem I have is a very slow network connection when I'm running deluged, but that's because my cable internet plan sucks. So I guess I will just leave the RAM alone.
[2013-07-25 12:28:59 -0600] Chris Moline:
very slow network connection when I'm running deluged
Look up "qos" (quality of service): Linux can be configured to prioritize sending small packets over larger ones. Small packets correspond almost exclusively to interactive connections... -- Gaetan
On Thu, Jul 25, 2013 at 5:39 PM, Gaetan Bisson <bisson@archlinux.org> wrote:
[2013-07-25 12:28:59 -0600] Chris Moline:
very slow network connection when I'm running deluged
Look up "qos" (quality of service): Linux can be configured to prioritize sending small packets over larger ones. Small packets correspond almost exclusively to interactive connections...
-- Gaetan
I have transmission set to use `tcp_lp` as `peer-congestion-algorithm` in the configuration file and it works quite well.
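For reference, that option lives in Transmission's settings.json (the path below is one common location and may vary by setup; edit it while the daemon is stopped, or it will be overwritten on exit):

```
// excerpt from settings.json, e.g. ~/.config/transmission-daemon/settings.json
"peer-congestion-algorithm": "tcp_lp",
```

Note that `tcp_lp` must be available in the kernel; check `/proc/sys/net/ipv4/tcp_available_congestion_control` and `modprobe tcp_lp` if it is missing.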
On Fri, Jul 26, 2013 at 8:35 AM, Daniel Micay <danielmicay@gmail.com> wrote:
On Thu, Jul 25, 2013 at 5:39 PM, Gaetan Bisson <bisson@archlinux.org> wrote:
[2013-07-25 12:28:59 -0600] Chris Moline:
very slow network connection when I'm running deluged
Look up "qos" (quality of service): Linux can be configured to prioritize sending small packets over larger ones. Small packets correspond almost exclusively to interactive connections...
-- Gaetan
I have transmission set to use `tcp_lp` as `peer-congestion-algorithm` in the configuration file and it works quite well.
Deluged doesn't seem to have a similar option. But I'm going with Gaetan's suggestion to use traffic shaping. It's a bit of reading since I haven't done much with iptables and tc but I will get there.
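A minimal sketch of the mark-and-deprioritize idea from the thread (the interface name, mark value, and port range are all assumptions to be adapted; run as root):

```shell
# Three-band priority qdisc on the outgoing interface (eth0 is an assumption)
tc qdisc add dev eth0 root handle 1: prio

# Mark bulk BitTorrent traffic; 6881:6891 is a common default range, not
# necessarily what deluged is configured to use
iptables -t mangle -A OUTPUT -p tcp --sport 6881:6891 -j MARK --set-mark 3

# Send packets carrying mark 3 to the lowest-priority band (1:3)
tc filter add dev eth0 parent 1: protocol ip handle 3 fw flowid 1:3
```

The prio qdisc drains band 1:1 before 1:2 before 1:3, so small interactive packets in the default bands go out ahead of the marked bulk traffic.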
participants (6)
- Calvin Morrison
- Chris Moline
- Daniel Micay
- Gaetan Bisson
- Leonid Isaev
- Sean Greenslade