cp from internal SSD via USB 3 to external HDD is very slow
Hi,

I am copying from an internal SATA SSD via USB 3 to an external SATA WD Blue, a 3.5" CMR HDD. The first part “cp -ai /mnt/winos7/” (see [*1]) took about 70 minutes for 188 GiB; almost all files are quite small. The second part “cp -ai /mnt/winos10/” (see [*1]) should copy about 100 GiB of small files plus about 740 GiB for a single vbox vdi file. The roughly 100 GiB of small files were copied, and about 310 GiB of the vdi file as well, as of 03:30, so about 410 GiB took about 9 hours. I think this is taking far too long.

Does anyone have an idea?

FWIW, ^Z was an oversight. I meant to press Ctrl+Alt+T to open a terminal tab, and apparently pressed Ctrl+Z (German QWERTY keyboard).

CPU Model: 6.191.5 "13th Gen Intel(R) Core(TM) i3-13100"
Memory Size: 32 GB

For top see [*2].

rocketmouse@archlinux ~ $ uname -r; cat /proc/cmdline
6.14.2-arch1-1
BOOT_IMAGE=/boot/vmlinuz-linux root=/dev/disk/by-label/m1.archlinux ro ibt=off ipv6.disable=1 kvm.enable_virt_at_load=0

Regards,
Ralf

[*1]
root@archlinux /home/rocketmouse # date; cp -ai /mnt/winos7/{*txt,o10_share,winOS_2} /mnt/ua.fantec/w7_ipad_archive-winos7_not_included-2025-apr-14/winos7/; echo $?; date; cp -ai /mnt/winos10/ /mnt/ua.fantec/w10ipad_archive+winos10_is_included-2025-apr-14/; echo $?; date
Mon 14 Apr 17:02:20 CEST 2025
0
Mon 14 Apr 18:13:45 CEST 2025
^Z
[1]+  Stopped    cp -ai /mnt/winos10/ /mnt/ua.fantec/w10ipad_archive+winos10_is_included-2025-apr-14/
148
Tue 15 Apr 02:56:03 CEST 2025
root@archlinux /home/rocketmouse # fg 1
cp -ai /mnt/winos10/ /mnt/ua.fantec/w10ipad_archive+winos10_is_included-2025-apr-14/

[*2]
top - 03:38:10 up 2 days, 17:21, 1 user, load average: 1.30, 1.26, 1.30
Tasks: 284 total, 1 running, 281 sleep, 2 d-sleep, 0 stopped, 0 zombie
%Cpu(s): 1.8 us, 0.8 sy, 0.0 ni, 84.3 id, 12.9 wa, 0.1 hi, 0.1 si, 0.0 st
MiB Mem : 31862.0 total, 316.4 free, 3401.6 used, 29130.6 buff/cache
MiB Swap: 16383.0 total, 16382.5 free, 0.5 used. 28460.4 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 82310 rocketm+  20   0  557740  60108  48756 S  10.3  0.2   0:18.00 roxterm
   663 root      20   0  819996 107492  66704 S   7.0  0.3  17:40.51 Xorg
100095 rocketm+  20   0 2587672 172912 117660 S   1.3  0.5   0:08.42 Isolated Web Co
 87583 root      20   0    7440   6136   5516 D   1.0  0.0  14:05.38 cp
  2078 rocketm+  20   0  167504  41148  33812 S   0.3  0.1   4:47.97 parcellite
  2697 rocketm+  20   0   73.9g 806944 168776 S   0.3  2.5  11:00.17 evolution
 96948 root      20   0       0      0      0 I   0.3  0.0   0:12.97 kworker/u32:4-events_unbound
100138 root      20   0       0      0      0 I   0.3  0.0   0:00.99 kworker/1:2-events
100195 rocketm+  20   0   70.5g 163948 142808 S   0.3  0.5   0:00.67 WebKitWebProces
100326 root      20   0       0      0      0 I   0.3  0.0   0:01.75 kworker/u32:1-flush-8:80
100948 rocketm+  20   0   12372   9836   7660 R   0.3  0.0   0:00.07 top
     1 root      20   0   24012  15008   9320 S   0.0  0.0   0:26.64 systemd
     2 root      20   0       0      0      0 S   0.0  0.0   0:00.23 kthreadd
     3 root      20   0       0      0      0 S   0.0  0.0   0:00.00 pool_workqu
On Tue, 2025-04-15 at 04:04 +0200, Ralf Mardorf wrote:
> FWIW ^Z was an oversight. I meant to press Ctrl+Alt+T to open a
> terminal tab, and apparently pressed Ctrl+Z (German QWERTY keyboard).

A typo, the Y should read Z :D
Please ignore this thread here. Since there were/are issues with the mailing list, I moved it to the forums: https://bbs.archlinux.org/viewtopic.php?pid=2237153

I also sent a request to the list owner.

-------- Forwarded Message --------
From: Ralf Mardorf
To: arch-general-owner
Subject: Is the mailing list down?
Date: 04/15/2025 04:27:15 AM
Mailer: Evolution 3.56.1

Hi,

there are no emails for 2025 in the mailing list archive. Was and is there no more traffic?

Regards,
Ralf

When I wrote this email, even the archive for December 2024 was empty. At the moment I see more emails in the list's April 2025 archive than in my inbox. The emails are also not in my ISP's spam folder.
Hi Ralf,
> I am copying from an internal SATA SSD via USB 3 to an external SATA
> WD Blue, a 3.5" CMR HDD.
>
> The first part “cp -ai /mnt/winos7/” (see [*1]) took about 70 minutes
> for 188 GiB; almost all files are quite small. The second part
> “cp -ai /mnt/winos10/” (see [*1]) should copy about 100 GiB of small
> files plus about 740 GiB for a single vbox vdi file. The roughly
> 100 GiB of small files were copied, and about 310 GiB of the vdi file
> as well, as of 03:30, so about 410 GiB took about 9 hours. I think
> this is taking far too long.
>
> Does anyone have an idea?
No, though I tend to use rsync(1) on large copies which might need continuing, or checking that the copy is identical after an unmount/mount to ensure the media is being read. Have you checked what SMART data can be accessed on the drives, given the possible restrictions like USB?
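A minimal sketch of the rsync-then-verify idea, with the real source and destination mounts stood in for by scratch directories (the rsync invocation and all paths here are illustrative, not taken from the thread):

```shell
#!/bin/sh
# For the actual copy, something along these lines keeps partial files
# so an interrupted transfer can be continued instead of restarted:
#   rsync -a --partial --info=progress2 /mnt/winos10/ /mnt/target/
#
# Verifying after an unmount/mount cycle can be done with checksums.
# Demonstrated here on scratch directories:
src=$(mktemp -d); dst=$(mktemp -d)
printf 'some data' > "$src/file"
cp -a "$src/file" "$dst/file"
# Identical checksums mean the destination read back what was written.
a=$(sha256sum "$src/file" | cut -d' ' -f1)
b=$(sha256sum "$dst/file" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "copy verified" || echo "MISMATCH"
rm -rf "$src" "$dst"
```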
> %Cpu(s): 1.8 us, 0.8 sy, 0.0 ni, 84.3 id, 12.9 wa, 0.1 hi, 0.1 si, 0.0 st
12.9% wa is time spent waiting for I/O. I'm guessing you have eight logical CPUs, as 1/8 = 0.125. IOW, the cp(1) is I/O bound. You may find ‘vmstat 1’ interesting to monitor the number of blocks read and written, ‘bi’ and ‘bo’. And ‘dstat -cd -Dtotal,sda1,sda2’ is handy to monitor the amount of I/O wait, ‘wai’, and the data moved per block device.

-- 
Cheers, Ralph.
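For scale, the figures quoted in the original post work out to a very low average rate. A quick back-of-the-envelope check (410 GiB in about 9 hours, i.e. 32400 s):

```shell
# Average throughput of the reported copy: 410 GiB in 32400 s.
awk 'BEGIN { printf "%.1f MiB/s\n", 410 * 1024 / 32400 }'
# prints 13.0 MiB/s
```

A healthy USB 3 link to a 3.5" CMR HDD would normally sustain sequential writes on the order of 100 MB/s, so something is roughly an order of magnitude off here.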
Hi Ralph,

the thread is from April. Some coincidences cannot possibly be coincidences. Yesterday, my Windows 10 VM reported that there was no more space available. From my many external backup drives, I took two, deleted some very old backups, and copied a few very old backups from one drive to the other, which still amounted to hundreds of GiB. This went very quickly.

Then I started a copy (# cp -ai) of the Windows 10 VM to one of the drives. In about 6 hours, not quite 1/3 has been copied, namely 230.4 G of 736.4 G. Either this SSD is damaged or, and this is my guess, it is connected to the PCIe to SATA converter and that is the culprit.

Well, I'll copy everything from the SATA SSD to an NVMe (better not from the SSD, but from the backup drive ;) and then see whether I'll replace the SSD and/or the converter. Or I'll try to get rid of the PCIe to SATA converter completely by replacing some of the around 900 GiB SSDs with much larger SSDs.

Regards,
Ralf

PS: I followed hints from https://bbs.archlinux.org/viewtopic.php?pid=2237153 :

$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-linux root=/dev/disk/by-label/m1.archlinux ro ipv6.disable=1 kvm.enable_virt_at_load=0 zswap.enabled=0 transparent_hugepage=never
Hi Ralf,
> the thread is from April.
It arrived here today. Here's a summary of the Received fields. It seems the ‘131d’ delay was within archlinux.org.

Tue 15 Apr 2025
03:04:38 +00:00:00  from [127.0.0.1] (localhost [127.0.0.1]) by fews02-sea.riseup.net (Postfix) with ESMTPSA id 4Zc6sy3c2qzFs33 for <arch-general@lists.archlinux.org>
         +00:00:00  from fews02-sea.riseup.net (fews02-sea-pn.riseup.net [10.0.1.112]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx1.riseup.net (Postfix) with ESMTPS id 4Zc6sy73yMzDrk6 for <arch-general@lists.archlinux.org>
03:04:44 +00:00:06  from mx1.riseup.net (mx1.riseup.net [198.252.153.129]) by lists.archlinux.org (Postfix) with ESMTPS id AEABA4CE2CC8 for <arch-general@lists.archlinux.org>
Sun 24 Aug
18:55:50 +15:51:12+131d  from [95.217.236.249] (localhost [IPv6:::1]) by lists.archlinux.org (Postfix) with ESMTP id 380855A491BE
18:55:58 +15:51:20+131d  from lists.archlinux.org (lists.archlinux.org [IPv6:2a01:4f9:c010:9eb4::1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mailwash54.pair.com (Postfix) with ESMTPS id C944D164952 for <ralph@inputplus.co.uk>

-- 
Cheers, Ralph.
participants (2)
- Ralf Mardorf
- Ralph Corderoy