Tags: python, web-scraping, lxml

How do I scrape an HTTPS page?


I'm using a Python script with 'lxml' and 'requests' to scrape a web page. My goal is to grab an element from the page and download it, but the content is served over HTTPS and I get an error when trying to access it. I'm sure there is some kind of certificate or authentication I need to include, but I'm struggling to find the right resources. I'm using:

page = requests.get("https://[example-page.com]", auth=('[username]','[password]'))

and the error is:

requests.exceptions.SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib

Solution

  • Adding verify=False to the GET request resolves the error: it tells requests to skip SSL certificate verification, which is the step that was failing.

    page = requests.get("https://[example-page.com]", auth=('[username]','[password]'), verify=False)
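
    Note that verify=False leaves the connection unverified (requests will also emit an InsecureRequestWarning). If you can get hold of the site's CA certificate, a safer option is to point verify at a CA bundle instead. A minimal sketch, where the bundle path is a placeholder and the URL/credentials are the same placeholders used in the question:

    import requests

    # Instead of disabling verification, point `verify` at a CA bundle.
    # The path below is hypothetical; use wherever the site's CA cert lives.
    page = requests.get(
        "https://example-page.com",        # placeholder URL from the question
        auth=("username", "password"),     # placeholder credentials
        verify="/path/to/ca-bundle.pem",   # hypothetical CA bundle path
    )
    page.raise_for_status()                # surface HTTP 4xx/5xx errors explicitly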
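
    For the original goal of grabbing an element from the page with lxml, here is a sketch of the full flow once the request succeeds. The XPath is hypothetical, since the question doesn't show which element is wanted:

    import requests
    from lxml import html

    page = requests.get(
        "https://example-page.com",        # placeholder URL
        auth=("username", "password"),     # placeholder credentials
        verify=False,                      # or a CA bundle path, as above
    )
    tree = html.fromstring(page.content)

    # Hypothetical selector; replace with the XPath for the element you want.
    hrefs = tree.xpath('//a[@class="download"]/@href')
    print(hrefs)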