[pacman-dev] [PATCH] libalpm: download rates becoming negative

Morgan Adamiec morganamilo at archlinux.org
Mon Apr 19 09:26:17 UTC 2021



On 19/04/2021 05:53, Andrew Gregory wrote:
> On 04/18/21 at 11:31pm, morganamilo wrote:
>> When a download fails on one mirror a new download is started on the
>> next mirror. This new download will have the initial size of whatever
>> has been downloaded so far, as well as the amount downloaded reset to
>> 0.
>>
>> To account for this, when a download changes mirror, save how much has
>> been downloaded so far and add that to dlcb calls.
>> ---
>>  lib/libalpm/dload.c | 14 ++++++++++++--
>>  lib/libalpm/dload.h |  1 +
>>  2 files changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/libalpm/dload.c b/lib/libalpm/dload.c
>> index a4c42f8d..27a7748a 100644
>> --- a/lib/libalpm/dload.c
>> +++ b/lib/libalpm/dload.c
>> @@ -207,8 +207,8 @@ static int dload_progress_cb(void *file, curl_off_t dltotal, curl_off_t dlnow,
>>  
>>  	/* do NOT include initial_size since it wasn't part of the package's
>>  	 * download_size (nor included in the total download size callback) */
>> -	cb_data.total = dltotal;
>> -	cb_data.downloaded = dlnow;
>> +	cb_data.total = dltotal + payload->resetprogress;
>> +	cb_data.downloaded = dlnow + payload->resetprogress;
>>  	payload->handle->dlcb(payload->remote_name, ALPM_DOWNLOAD_PROGRESS, &cb_data);
>>  	payload->prevprogress = current_size;
>>  
>> @@ -440,6 +440,16 @@ static int curl_retry_next_server(CURLM *curlm, CURL *curl, struct dload_payload
>>  		fseek(payload->localf, 0, SEEK_SET);
>>  	}
>>  
>> +
>> +	/* reset progress for next server */
>> +	payload->resetprogress += payload->prevprogress - payload->initial_size;
>> +	payload->unlink_on_fail = 0;
> 
> Without looking at the rest of the patch, how does this line not just
> straight up break unlink_on_fail functionality for any download that
> falls back to another server?
> 

Honestly, I don't really know. I was using
e83e868a77865d42a33076605f9a90a165f7c93a as a base and the reset was
already done there.

I guess the point is that curl_check_finished_download() sets
unlink_on_fail = 1 in most situations, so we reset it to 0 on retry,
knowing it may be set to 1 again on the next check_finished.

Though I guess this breaks tempfile downloads, which should always have
unlink_on_fail = 1.
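
For reference, the accounting the patch adds can be miniaturized like
this (a hedged sketch; payload_t and carried are illustrative stand-ins
for dload_payload and resetprogress, not the real struct):

```c
/* Illustrative stand-in for the relevant part of dload_payload. */
typedef struct {
	long carried; /* bytes already fetched from mirrors that failed */
} payload_t;

/* curl restarts dlnow at 0 on a new mirror; adding the carried bytes
 * keeps the value reported to dlcb monotonic, so a front end computing
 * rate as (delta downloaded / delta time) can't see it go negative. */
static long reported_downloaded(const payload_t *p, long dlnow)
{
	return dlnow + p->carried;
}
```

On a mirror switch, curl_retry_next_server would bump carried by the
bytes obtained so far, so the next mirror's dlnow=0 still reports the
old total rather than dropping back to zero.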


Also looking at the code above:

if(payload->unlink_on_fail) {
	/* we keep the file for a new retry but remove its data if any */
	fflush(payload->localf);
	if(ftruncate(fileno(payload->localf), 0)) {
		RET_ERR(handle, ALPM_ERR_SYSTEM, -1);
	}
	fseek(payload->localf, 0, SEEK_SET);
}

I don't see why we'd want to wipe the file instead of trying to resume
from the next mirror.

In fact I don't see the point of this variable at all. Why would we ever
not want to unlink on fail?
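
As a sketch of the alternative: rather than truncating the partial file
before retrying, a client can ask the next mirror to continue where the
previous one stopped by sending an HTTP Range header (curl exposes this
as CURLOPT_RESUME_FROM_LARGE). This hypothetical helper only formats
the header value for a given resume offset; whether the mirror actually
honors ranges would still have to be checked by the caller.

```c
#include <stdio.h>

/* "Range: bytes=<offset>-" requests everything from offset onward;
 * a 206 Partial Content reply means the server honored it. */
static int format_range_header(char *buf, size_t len, long offset)
{
	return snprintf(buf, len, "Range: bytes=%ld-", offset);
}
```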

