I am intermittently getting the following error back from the Splunk API (about 40% of the time the search works as expected):
HTTP 503 Service Unavailable -- Search not executed: This search could not be dispatched because the role-based disk usage quota of search artifacts for user "[REDACTED]" has been reached (usage=1067MB, quota=1000MB). Use the [[/app/search/job_manager|Job Manager]] to delete some of your search artifacts, or ask your Splunk administrator to increase the disk quota of search artifacts for your role in authorize.conf., usage=1067MB, quota=1000MB, user=[REDACTED], concurrency_category="historical", concurrency_context="user_instance-wide"
The default TTL for a search in the Splunk API is 10 minutes (at least at my company). I am told that if I lower the TTL for my searches (which are massive), I will stop running out of space. I do not have admin access, so I have no ability to increase my quota or clear space on the fly (as far as I know). I can find code showing how to lower the TTL for saved searches, but I use oneshot searches, and it is not reasonable for me to switch.
How do I lower the TTL for oneshot searches? Here is what I have now, which does not seem to lower the TTL:
import splunklib.client as client
import splunklib.results as results

# set up the Splunk connection
service = client.connect(
    host=HOST,
    port=PORT,
    username=suser,
    password=spass,
    autologin=True,
)
# set up the search arguments
kwargs_oneshot = {
    "count": "0",
    "earliest_time": begin,
    "latest_time": end,
    "set_ttl": 60,
}
# run the oneshot search
oneshotsearch_results = service.jobs.oneshot(query, **kwargs_oneshot)
# Get the results and display them using the ResultsReader
reader = results.ResultsReader(oneshotsearch_results)
Rather than set_ttl, I believe you need ttl or timeout. See https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTsearch#search.2Fjobs
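In concrete terms, the change would look like the following sketch. It assumes timeout (the number of seconds to keep the search artifact after the job stops running, per the search/jobs REST reference) is the parameter your Splunk version honors; begin and end are placeholders for whatever time-range values you already use.

```python
begin = "-24h"  # placeholder for your existing earliest_time value
end = "now"     # placeholder for your existing latest_time value

# Same arguments as before, but with "timeout" replacing "set_ttl":
# the artifact should be deleted ~60 s after the search finishes,
# instead of lingering for your instance's 10-minute default.
kwargs_oneshot = {
    "count": "0",
    "earliest_time": begin,
    "latest_time": end,
    "timeout": "60",
}
```

You would then dispatch the search exactly as before with service.jobs.oneshot(query, **kwargs_oneshot); only the keyword argument changes.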
Also, consider making your searches less massive or running them less often.