python, python-requests, web-crawler

Python-Requests (>= 1.*): How to disable keep-alive?


I'm trying to program a simple web crawler using the Requests module, and I would like to know how to disable its default keep-alive feature.

I tried using:

import requests

s = requests.session()
s.config['keep_alive'] = False

However, I get an error stating that the session object has no attribute 'config'. I think this was changed in the new version, but I cannot seem to find how to do it in the official documentation.

The truth is, when I run the crawler on a specific website, it only gets five pages at most and then keeps looping around infinitely, so I thought it might have something to do with the keep-alive feature!

PS: Is Requests a good module for a web crawler? Is there something better suited?

Thank you!


Solution

  • This works

    import requests

    s = requests.session()
    s.keep_alive = False  # disable the default keep-alive behaviour for this session
    

    Answered in the comments of a similar question.
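
    If setting keep_alive on the session doesn't seem to change anything in your version of Requests, another commonly used approach (a minimal sketch, not part of the original answer; the URL is a placeholder) is to send a "Connection: close" header so the server closes the connection after each response:

        import requests

        s = requests.session()
        # Ask the server to close the connection after every response
        # instead of keeping it open for reuse.
        s.headers.update({'Connection': 'close'})

        response = s.get('http://example.com/')
        print(response.status_code)

    Since the header is set on the session, it is sent with every request made through it, which has the same practical effect as disabling keep-alive.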