I'm using a shared EMR cluster with JupyterHub installed. When the cluster is under heavy load, my Spark application fails to start with a timeout error. How do I increase the startup timeout from 60 seconds to something greater, like 900 seconds (15 minutes)?
I found the file that controls this timeout:

/etc/jupyter/conf/config.json

In it, set:

"livy_session_startup_timeout_seconds": 900

The Livy session startup timeout is now 900 seconds instead of the previous 60.