[arch-general] SSD TRIM using fstrim.service and fstrim.timer
Hello. Per the Arch wiki SSD page, I just ran "systemctl enable fstrim.timer" and then rebooted. I did not "enable" fstrim.service. Now fstrim.timer is loaded and active (but "waiting"), and fstrim.service is loaded but inactive. And the timestamp file the wiki mentions has a size of 0. So, do I have to wait (A WEEK!) to see if it works, or can I somehow now run fstrim.service manually to at least get it done once? Note: I could just add "discard" to /etc/fstab, but wouldn't that wear out the SSD faster than periodic trimming? And yes, my SSD does support TRIM.
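For clarity, what I ran was just:

    systemctl enable fstrim.timer
    systemctl status fstrim.timer fstrim.service

and by "discard" I mean a mount option roughly along these lines (the device and filesystem here are only an example, not my actual setup):

    /dev/sdX2   /   ext4   defaults,discard   0 1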
On Sun, Dec 27, 2015 at 09:45:27PM -0500, Francis Gerund wrote:
Per the Arch wiki SSD page, I just ran "systemctl enable fstrim.timer" and then rebooted. I did not "enable" fstrim.service. Now fstrim.timer is loaded and active (but "waiting"), and fstrim.service is loaded but inactive. And the timestamp file the wiki mentions has a size of 0.
In /var/lib/systemd/timers? They all have zero size; it's their timestamp (mtime) that matters.
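For example (the stamp file name below is the default one systemd uses for persistent timers):

    ls -l /var/lib/systemd/timers/stamp-fstrim.timer   # size 0 is normal, the mtime is the last trigger
    systemctl list-timers fstrim.timer                  # shows last and next activation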
So, do I have to wait (A WEEK!) to see if it works, or can I somehow now run fstrim.service manually to at least get it done once?
fstrim.service most likely ran at boot, silently, so you didn't notice. If you use the systemd journal, check it; otherwise just start fstrim.service without enabling it (or run its ExecStart command line).
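For example (unit name as above; the -v is only added to get verbose output):

    journalctl -u fstrim.service        # did it already run at boot?
    systemctl start fstrim.service      # one-off run, no enabling needed
    fstrim -a -v                        # or run the ExecStart command by hand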
Note: I could just add "discard" to /etc/fstab, but wouldn't that wear out the SSD faster than periodic trimming?
I don't know precise numbers, but in my experience neither makes a difference performance-wise. I'd say that if SSD wear is a problem (i.e. if you estimate it happening within the expected usage time of the device), just switch to a HDD.

HTH,
-- Leonid Isaev
I hope this doesn't sound stupid, but I'm totally new to systemd, and I am not familiar with systemd-journal. So, I did:

    systemctl start fstrim.service

It seems to have worked. I got:

    systemctl status fstrim.service
    fstrim.service - Discard unused blocks
       Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static; vendor preset: disabled)
       Active: inactive (dead) since [redacted]; 3min 50s ago
      Process: 1200 ExecStart=/sbin/fstrim -a (code=exited, status=0/SUCCESS)
     Main PID: 1200 (code=exited, status=0/SUCCESS)

So thanks for the reply.
Note: I could just add "discard" to /etc/fstab, but wouldn't that wear out the SSD faster than periodic trimming?
I don't know precise numbers, but in my experience neither makes a difference performance-wise. I'd say that if SSD wear is a problem (i.e. if you estimate it happening within the expected usage time of the device), just switch to a HDD.
Using discard will cause less wear than using fstrim or not using TRIM at all, because TRIM gives the SSD more freedom to do wear levelling. Anyway, wear isn't an issue on anything but low-quality hardware. Even consumer-class drives like Samsung's EVO line are going to take 5+ years to burn through their P/E cycles under extremely heavy usage. You can keep track via the SMART data: Samsung drives have an attribute with the number of P/E cycles used so far and the percentage of remaining life, and other vendors probably have a similar field (but perhaps not with the same transparency).
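For example, with smartmontools (the device nodes below are placeholders and the attribute names differ per vendor):

    smartctl -A /dev/sda      # SATA: e.g. Wear_Leveling_Count on Samsung drives
    smartctl -A /dev/nvme0    # NVMe: look at "Percentage Used" and "Data Units Written"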
On 12/28/2015 12:31 PM, Daniel Micay wrote:
Anyway, wear isn't an issue on anything but low-quality hardware. Even consumer-class drives like Samsung's EVO line are going to take 5+ years to burn through their P/E cycles under extremely heavy usage. You can keep track via the SMART data: Samsung drives have an attribute with the number of P/E cycles used so far and the percentage of remaining life, and other vendors probably have a similar field (but perhaps not with the same transparency).
It's worth noting that most SSD life estimates are very conservative.

https://techreport.com/review/24841/introducing-the-ssd-endurance-experiment

All of the drives in the test lasted far beyond the manufacturer's expected lifetime, and 2 drives went on to take over two *petabytes* of writes before dying.

-- Anthony Mapes
On 28-12-2015 19:51, Anthony Mapes wrote:
It's worth noting that most SSD life estimates are very conservative.
https://techreport.com/review/24841/introducing-the-ssd-endurance-experiment
All of the drives in the test lasted far beyond the manufacturer's expected lifetime, and 2 drives went on to take over two *petabytes* of writes before dying.
You should take that with a grain of salt; sure, they "lasted" that long, but they were completely dead afterwards. Also, I'd say that they haven't properly tested data retention: flash can hold data for less time as it reaches end of life, so if you plan to use the drive until it dies, make sure you do regular backups.

-- Mauro Santos
Okay, thanks for the replies. Now, if backing up wasn't such a chore . . .
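Though I suppose even a single rsync to a spare disk would cover it; /mnt/backup here is just a placeholder for wherever the backup drive is mounted, and the brace expansion needs bash:

    rsync -aAX --delete --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found} / /mnt/backup/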
participants (5)
- Anthony Mapes
- Daniel Micay
- Francis Gerund
- Leonid Isaev
- Mauro Santos