I am trying to download an original image (PNG format) by URL, convert it on the fly (without saving it to disk), and save it as a JPG.
The code is as follows:
import os
import io
import requests
from PIL import Image
...
r = requests.get(img_url, stream=True)
if r.status_code == 200:
    i = Image.open(io.BytesIO(r.content))
    i.save(os.path.join(out_dir, 'image.jpg'), quality=85)
It works, but when I try to monitor the download progress (for a future progress bar) with r.iter_content() like this:
r = requests.get(img_url, stream=True)
if r.status_code == 200:
    for chunk in r.iter_content():
        print(len(chunk))
    i = Image.open(io.BytesIO(r.content))
    i.save(os.path.join(out_dir, 'image.jpg'), quality=85)
I get this error:
Traceback (most recent call last):
  File "E:/GitHub/geoportal/quicklookScrape/temp.py", line 37, in <module>
    i = Image.open(io.BytesIO(r.content))
  File "C:\Python35\lib\site-packages\requests\models.py", line 736, in content
    'The content for this response was already consumed')
RuntimeError: The content for this response was already consumed
So is it possible to monitor the download progress and still get the data itself afterwards?
When using r.iter_content(), you need to buffer the results somewhere. Unfortunately, I can't find any examples where the contents get appended to an object in memory--usually, iter_content is used when a file can't or shouldn't be loaded entirely into memory at once. However, you can buffer it using a tempfile.SpooledTemporaryFile, as described in this answer: https://stackoverflow.com/a/18550652/4527093. This will prevent saving the image to disk (unless the image is larger than the specified max_size). Then, you can create the Image from the tempfile.
import os
import io
import requests
from PIL import Image
import tempfile

# Buffer the download in memory, spilling to disk only above max_size (~1 GB here)
buffer = tempfile.SpooledTemporaryFile(max_size=1e9)
r = requests.get(img_url, stream=True)
if r.status_code == 200:
    downloaded = 0
    filesize = int(r.headers['content-length'])
    for chunk in r.iter_content(chunk_size=1024):
        downloaded += len(chunk)
        buffer.write(chunk)
        print(downloaded / filesize)  # fraction of the file downloaded so far
    buffer.seek(0)
    i = Image.open(io.BytesIO(buffer.read()))
    i.save(os.path.join(out_dir, 'image.jpg'), quality=85)
buffer.close()
Edited to include chunk_size, which limits the progress updates to one per 1 kB instead of one per byte.
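If the images are small enough to hold fully in memory, the chunks can also be appended directly to an io.BytesIO object, which is the "object in memory" approach mentioned above. The following is only a sketch of that idea, not part of the original answer; the chunk_size value and the fallback for a missing Content-Length header are assumptions.

import io
import os
import requests
from PIL import Image

r = requests.get(img_url, stream=True)
if r.status_code == 200:
    # Content-Length can be missing for chunked responses; fall back to 0
    filesize = int(r.headers.get('content-length', 0))
    buffer = io.BytesIO()
    downloaded = 0
    for chunk in r.iter_content(chunk_size=1024):
        downloaded += len(chunk)
        buffer.write(chunk)
        if filesize:
            print(downloaded / filesize)
    buffer.seek(0)  # rewind so PIL reads from the start
    i = Image.open(buffer)
    i.save(os.path.join(out_dir, 'image.jpg'), quality=85)

PIL can read straight from the BytesIO object, so the extra io.BytesIO(buffer.read()) copy isn't needed, but unlike a SpooledTemporaryFile this never spills to disk, so very large downloads will stay entirely in RAM.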