Hi

On Sun, Jun 9, 2013 at 5:53 PM, Pedro Emílio Machado de Brito <pedroembrito@gmail.com> wrote:
2013/6/9 Alfredo Palhares <masterkorp@masterkorp.net>:
Hello,
So I was creating an Arch Linux bootable USB drive:
[root@masterkorp-laptop Downloads]# dd bs=4M if=archlinux-2013.06.01-dual.iso of=/dev/sdb
130+1 records in
130+1 records out
548405248 bytes (548 MB) copied, 0.964976 s, 568 MB/s
I was like WOW, that was too fast! But nothing ever gets written to the pen drive. To add to the weirdness, a dd to /dev/sdb1 (the partition) works as it should, slowly, but then of course the ISO is unbootable.
The md5sum of the ISO is correct. I tried with different pen drives.
Please, any suggestions are welcome.
I've been having this sort of problem with removable storage lately (copying multiple GBs of songs in a few seconds, except not really). The workaround I found is to run the sync command after copying.
"sync" is not a workaround, it is a right solution. Under the hood copying in linux works following way. Every time you read something from disk the file information will stay cached in memory region called "buffer cache". Next time you read the same information kernel it will be served from RAM, not from disk. This speedups the read operation a lot - reading from RAM ~100000 faster than reading from disk [1]. "buffer cache" is used for write operations as well. When you write to disk it is actually writes to to memory and operation reported as "finished". Moments later special process called "writeback" sends this data to disk. This trick also allows to speedup the process. Of course it supposes that underlying disk will not suddenly disappear (like in your case with USB pen). If you want to make sure that data is really written then you should do one of the following things: 1) Unmount device correctly. Instead of just pulling the USB pen you should do "umount YOUR_DEVICE_NAME". "umount" flushes all dirty blocks to the device. 2) Call "sync" (that flushes dirty buffers) and then plug/umount USB pen. 3) Call "dd" operation with "conv=fsync" flag, this tells that dd should not return until all data is written to the device. That could take some time as all your data is being actually written
to disk.
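For the original command, a minimal sketch of options 2) and 3), assuming /dev/sdb is the pen drive as above:

# option 3: dd does not return until the data is on the device
dd bs=4M if=archlinux-2013.06.01-dual.iso of=/dev/sdb conv=fsync

# option 2: if dd has already returned, flush dirty buffers, then unmount
sync
umount /dev/sdb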
I believe this is related to the write cache; please let me know if you find a better solution to this.
When I first learned dd for creating bootable disks, 1M was the suggested block size because you could supposedly miswrite with something larger. The caution was: "Use 1M to restrict your speed so that it writes properly", which I never actually understood but never had a problem with. Only recently have I seen the suggestion to use 4M, but always with 1M as the fallback option if it doesn't work.
This statement does not make sense to me. A larger block size is better because you need to make fewer system calls. If a larger block size miswrites data, then that is a bug (in the kernel or a driver) and should be reported to the kernel mailing list.

[1] http://highscalability.com/numbers-everyone-should-know
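If you want to see the difference in system call counts yourself, a rough sketch with strace (of=/dev/null is just to avoid touching the pen drive; exact counts will vary):

# count read/write syscalls for 1M vs 4M blocks
strace -c -e trace=read,write dd bs=1M if=archlinux-2013.06.01-dual.iso of=/dev/null
strace -c -e trace=read,write dd bs=4M if=archlinux-2013.06.01-dual.iso of=/dev/null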