I have multiple URLs like 'https://static.nseindia.com/s3fs-public/2022-09/ind_prs01092022.pdf' and I want to loop through an array of these and download them to a local folder. I saw that I may need to use s3fs, but I am unsure what the bucket name should be. (download file using s3fs)
You don't need s3fs or a bucket name here: the `s3fs-public` segment is just part of the URL path, and the file is served over plain HTTPS, so `requests` works fine. The catch is that this web server doesn't respond unless a user agent is among the request headers, which is fairly common behavior.
import requests

with requests.Session() as s:
    r = s.get(
        'https://static.nseindia.com/s3fs-public/2022-09/ind_prs01092022.pdf',
        headers={'User-Agent': 'Python'},  # or any non-empty string
    )
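To cover the looping-and-saving part of the question, here is a minimal sketch. The URL list and the `downloads` folder name are placeholders for your own values; the local file name is derived from the last segment of each URL's path.

```python
from pathlib import Path
from urllib.parse import urlparse

import requests


def local_name(url: str) -> str:
    """Derive a local file name from the last segment of the URL path."""
    return Path(urlparse(url).path).name


def download_all(urls, out_dir='downloads'):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    with requests.Session() as s:
        s.headers['User-Agent'] = 'Python'  # any non-empty string works
        for url in urls:
            r = s.get(url, timeout=30)
            r.raise_for_status()  # fail loudly on 4xx/5xx
            (out / local_name(url)).write_bytes(r.content)


# Example (placeholder list -- substitute your own URLs):
# download_all([
#     'https://static.nseindia.com/s3fs-public/2022-09/ind_prs01092022.pdf',
# ])
```

Setting the header on the session means every request in the loop carries it automatically.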