On Tue, Sep 8, 2009 at 05:49, Dwight Schauer <dschauer@gmail.com> wrote:
Dear fellow Archers,
I tarred up a couple of filesystems and piped the tar stream through ssh to a remote computer, where I dd'ed it to a file. This is a common backup method I've been using for a few years now when I'm going to wipe a system and start over.
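For reference, the pipeline looks roughly like this (the hostname, mount point, and destination path are placeholders, not the exact command I ran):

# On the machine being backed up: stream a tar of the filesystem over
# ssh and write it to a single file on the remote box with dd.
tar -cf - -C /mnt/source . | ssh backuphost 'dd of=/backups/a4b4.tar bs=1M'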
I'm using JFS on the Arch Linux system that the backup was being copied to.
The resulting file ended up being 137G (which is about right based on the source filesystem usage).
du --human --total a4b4.tar
137G    a4b4.tar
137G    total
However, I can only restore files from the first 63G of the tarball, so I attempted to see how much of it could be read.
dd if=a4b4.tar of=/dev/null
dd: reading `a4b4.tar': Input/output error
123166576+0 records in
123166576+0 records out
63061286912 bytes (63 GB) copied, 1193.69 s, 52.8 MB/s
There were no critical files in that tarball that are not kept elsewhere, so that is not the issue. At this point I can consider whatever is past the 63G point in the tarball to be unrecoverable, which is fine.
I tried skipping the first 63GB, but that does not work.
dd if=a4b4.tar skip=123166576 of=/dev/null
dd: reading `a4b4.tar': Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 27.2438 s, 0.0 kB/s
It seems like it took a while to figure out that it could not perform this operation.
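If I did want to salvage whatever is still readable while skipping past the bad region, dd's conv=noerror,sync is the usual tool for that; a sketch, untested on this file, with the output filename being just a placeholder:

# conv=noerror makes dd continue past read errors instead of aborting;
# sync pads short or unreadable blocks with zeros so offsets in the
# copy still line up with the original file.
dd if=a4b4.tar of=a4b4-salvaged.tar bs=512 conv=noerror,sync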
The box in question is running an OpenVZ patched 2.6.27 kernel, but that might not have anything to do with it.
Yeah, I know: I could have used bzip2 and made two separate files, I could have used rsync -av, I could have checked the tarball before wiping the source filesystems, etc. That is not the point here. Now that I know that JFS on my setup has a 63GB file size limit, I know to accommodate that in the future.
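(By "checked the tarball" I mean something along these lines; listing the archive forces every byte to be read, so truncation or an I/O error shows up right away:)

# Read the whole archive end to end before wiping the source; any read
# error or premature end-of-archive gets reported here.
tar -tf a4b4.tar > /dev/null && echo "archive reads cleanly"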
I'm mainly just curious how the system could write a larger file than it can read.
Dwight
I just checked, and the max file size in JFS is 4 petabytes. Seems like you have another problem.
-- Anders Bergh