On Tuesday, 8 May 2018 at 21:08:35 CEST, Carsten Mattner wrote:
The Linux block layer's writeback system was supposed to fix this, but I've also noticed that the mechanism isn't perfect and you can still have a "hanging" application when doing the infamous USB-to-USB transfer that kills the VM subsystem.
Another way I can reproduce it is during an SSD-to-thumb-drive transfer when you decide to do some other disk activity, too.
https://lwn.net/Articles/682582/
The problem is that the VM subsystem gets pressured a lot, and the whole construct fails in a sense, even while working as designed, manifesting as hanging programs.
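One quick way to see that pressure building while such a copy is running is to watch the dirty-page counters, roughly like this:

    # Watch dirty pages and pages under writeback pile up during the
    # transfer (values are in kB)
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'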
OK, but I'm talking about an HDD on a SATA bus; no USB-to-USB transfers are involved. And, as I said before, I've already tweaked the vm.dirty_* settings (writeback and background ratio) to partially work around this, on both Debian and Arch Linux.
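Concretely, that kind of tweaking looks roughly like this (the numbers are example values only, not a recommendation):

    # Show the current writeback thresholds
    sysctl vm.dirty_background_ratio vm.dirty_ratio

    # Lower them so background writeback kicks in earlier and less
    # dirty data can pile up before writers are throttled
    sudo sysctl -w vm.dirty_background_ratio=5
    sudo sysctl -w vm.dirty_ratio=10

    # Make the change persistent across reboots
    printf 'vm.dirty_background_ratio = 5\nvm.dirty_ratio = 10\n' | \
        sudo tee /etc/sysctl.d/99-writeback.conf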
First, to confirm: if you manage to run the indexer as you would with `ionice -c idle <the-indexer>`, and it shows fewer hangs, you know the issue is unfair I/O queuing.
The indexer's process autostarts and rapidly kills itself within a few seconds... it's impossible to renice the process. However, I saw (in KSysGuard) that its nice value is always 19, meaning the lowest priority for the CPU and, I think, for the I/O too.
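Just to spell out the earlier suggestion: if the process could be caught while it is alive, its I/O class could be checked and changed by PID, roughly like this (`<the-indexer>` is the same placeholder as above):

    # Find the PID of the running indexer (placeholder name)
    PID=$(pidof "<the-indexer>")

    # Show its current I/O scheduling class and priority
    ionice -p "$PID"

    # Move it to the idle I/O class so it only gets disk time when
    # nothing else is asking for it
    sudo ionice -c idle -p "$PID"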
You can compare the block layer kernel configuration of Debian vs Arch.
Sorry, how? What are the keywords to search for?
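Presumably the relevant keywords are the CONFIG_BLK_* and CONFIG_IOSCHED_* options; something like this should do for the comparison (config file locations are the usual defaults for the two distros):

    # Debian ships the running kernel's config under /boot
    grep -E 'CONFIG_IOSCHED|CONFIG_BLK' /boot/config-"$(uname -r)" > debian-blk.txt

    # Arch exposes it via /proc/config.gz (needs CONFIG_IKCONFIG_PROC=y)
    zgrep -E 'CONFIG_IOSCHED|CONFIG_BLK' /proc/config.gz > arch-blk.txt

    # Diff the two lists to spot block layer differences
    diff -u debian-blk.txt arch-blk.txt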
You can try the deadline or bfq schedulers. One is dead simple and the other optimizes for desktop responsiveness.
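Trying one of them on a given disk looks roughly like this (sda is an assumed device name, and which schedulers show up depends on how the kernel was built; on kernels of that era bfq usually needs blk-mq enabled):

    # Show the schedulers offered for this disk; the active one is in
    # square brackets, e.g.  noop deadline [cfq]
    cat /sys/block/sda/queue/scheduler

    # Switch to deadline (or bfq, if listed) until the next reboot
    echo deadline | sudo tee /sys/block/sda/queue/scheduler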
As a last resort, I'll look into these alternative schedulers. But right now both Debian and Arch are using the same one: CFQ. So I don't think this is the cause of the problem. -- fp