I have a Python script that uses the requests module. It works on my Windows desktop, and it works when I run it manually on my VM (Ubuntu 14.04, Python 2.7.14). However, when the exact same command is scheduled as a cron job on the same VM, it fails.
The offending line seems to be:
index_response = requests.get(my_https_URL, verify=False)
The (slightly redacted) response is:
(<class 'requests.exceptions.SSLError'>, SSLError(MaxRetryError("HTTPSConnectionPool(host=my_https_URL, port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '_ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure'),))",),), <traceback object at 0x7f3d877ce5f0>)
HTTP URLs don't seem to be affected, and indeed some HTTPS URLs are working... (I think it's the domains requiring SNI that are failing.)
I've tried adding verify=False, and even running the cron job as root, but no change.
It originally failed when I ran it manually too, but installing Python 2.7.14 (alongside the system's 2.7.6) fixed the manual runs. To add further complexity, the script is being triggered by another script using exec (a test runner that runs various test scripts, including this one) - could that be related?
I didn't have this problem with urllib2, but I'd prefer not to move back from requests to urllib2 if I can help it...
Posting as an answer so I can format things better:
It might be that cron simply picks up the wrong interpreter in your case. A simple fix is to give the full path to the right one; you can find it with:
md@gw1:~$ type python2.7
python2.7 is /usr/bin/python2.7
md@gw1:~$
Then use it in your crontab:
0 12 * * * /usr/bin/python2.7 your_script.py
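While debugging, it can also help to capture the script's stdout/stderr, which cron otherwise mails or discards; the log path here is just an example:

```shell
# Append both stdout and stderr to a log so the SSL traceback is captured
0 12 * * * /usr/bin/python2.7 your_script.py >> /tmp/your_script.log 2>&1
```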
Alternatively, use a so-called shebang:
md@gw1:~$ cat your_script.py
#!/usr/bin/python2.7
print "hello"
md@gw1:~$ chmod +x your_script.py
md@gw1:~$ ./your_script.py
hello
md@gw1:~$
The shebang on the first line of your_script.py tells the system which interpreter to use, and the chmod makes the script executable.
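To confirm that cron is now picking up the intended interpreter (your handshake failure suggests the older 2.7.6 build with an OpenSSL that can't do SNI), you can have the script report its own environment at startup and compare a manual run against the cron run. A generic diagnostic sketch, nothing project-specific:

```python
import ssl
import sys

# Report which interpreter and OpenSSL build are actually in use.
# Run this both manually and from cron and compare the output;
# a mismatch here would explain the differing SSL behaviour.
print(sys.executable)        # e.g. /usr/bin/python2.7
print(sys.version)
print(ssl.OPENSSL_VERSION)   # SNI needs a reasonably recent OpenSSL
```

The print-with-parentheses form works under both Python 2 and 3, so the snippet runs no matter which interpreter cron ends up choosing.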