Tags: python, urllib, try-except

Tweaking retry logic to download files in a list


I have a list of about 20 URLs that I iterate over with a for loop, downloading (using urllib) the file each of them points to. This for loop sits inside a try-except block (for obvious reasons), which in turn sits inside another for loop that implements the retry logic (3 attempts). What I'd like to know is whether there's a way to avoid re-downloading the whole list from the beginning when something fails (and the except block catches it and retries).

So if the loop has downloaded 13 files and encounters an error at the 14th, can I just retry the 14th one and carry on, instead of downloading all the files from the beginning?

Here's the code:

import os
import urllib.request

retry = 3
for r in range(retry):
    try:
        for i in urls:
            n = os.path.basename(i)
            # Download the file and save it under app/test with file name n
            urllib.request.urlretrieve(i, f'app/test/{n}')
    except Exception as e:
        if r < 2:
            print(f'Failed. Attempt # {r + 1}')
            continue
        else:
            print('Error encountered at third attempt')
            print(e)
    break

Solution

  • You can swap the for loops, so the retry loop wraps each individual download instead of the whole list:

    retry = 3
    for i in urls:
        n = os.path.basename(i)
        for r in range(retry):
            try:
                # Download the file and save it under app/test with file name n
                urllib.request.urlretrieve(i, f'app/test/{n}')
            except Exception as e:
                if r < 2:
                    print(f'Failed. Attempt # {r + 1}')
                else:
                    print('Error encountered at third attempt')
                    print(e)
            else:
                print(f"Success: {n}")
                break
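
    A minimal sketch (not part of the answer, just one way to tidy it up): the per-file retry can be factored into a small helper that takes any callable, so the download loop stays flat. The demo below uses a deliberately flaky local function instead of a real network call; in the question's setting the callable would be `urllib.request.urlretrieve`.

    ```python
    def retry_call(func, *args, attempts=3):
        """Call func(*args), retrying on any exception.

        Returns True on success, False once every attempt has failed.
        """
        for attempt in range(1, attempts + 1):
            try:
                func(*args)
            except Exception as exc:
                print(f'Attempt {attempt} failed: {exc}')
            else:
                return True
        return False

    # In the question's setting this would be (hypothetical paths, as above):
    #   for url in urls:
    #       name = os.path.basename(url)
    #       retry_call(urllib.request.urlretrieve, url, f'app/test/{name}')

    # Demo with a deliberately flaky callable: fails twice, then succeeds.
    calls = {'n': 0}
    def flaky():
        calls['n'] += 1
        if calls['n'] < 3:
            raise OSError('temporary failure')

    print(retry_call(flaky))  # True after the third attempt
    ```

    A nice side effect of this shape is that a permanent failure on one URL costs at most `attempts` tries before the loop moves on to the next file.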