The function runs and works perfectly fine for a small data set, but for a large data set the following error is produced:
(the URL is redacted in the error screenshot)
A 500 error that the browser reports as a CORS failure. As you can see from the code below, I've tried adding multiple response headers to the response (a StreamingHttpResponse object), with a focus on extending how long the function can run/execute/query before being timed out. I've also tried adding headers to handle the cross-site issue, which I don't believe is the actual problem, because the function works perfectly fine with a small dataset:
from django.http import StreamingHttpResponse
from rest_framework_csv import renderers

def create_and_download_csv(self, request):
    qs = self.filter_queryset(self.get_queryset())
    serialized = self.serializer_class(qs, many=True)
    # map CSV column keys to their display labels (placeholder names)
    headers = {
        "column_header_1": "Column Header 1",
        "column_header_2": "Column Header 2",
        "column_header_3": "Column Header 3",
        "column_header_4": "Column Header 4",
    }
    response = StreamingHttpResponse(
        renderers.CSVStreamingRenderer().render(
            serialized.data,
            renderer_context={"header": list(headers.keys()), "labels": headers},
        ),
        content_type="text/csv",
    )
    # headers added while trying to work around the timeout / cross-site errors
    response["Access-Control-Allow-Headers"] = "*"
    response["Connection"] = "keep-alive"
    response["Content-Disposition"] = 'attachment; filename="status.csv"'
    response["Access-Control-Allow-Origin"] = "*"
    response["Keep-Alive"] = 200
    response["Timeout"] = 100
    print(response)
    return response
Perhaps I'm setting the headers in the wrong place? Or is the Docker container the project runs in where I should configure the timeout value? Please help if you have an idea.
Turns out the issue was the timeouts set on Gunicorn and Kong, which are used in the deployment/hosting/serving of the project. The request was timing out because the workers were killed off before they could return a valid response, so the browser only ever saw an error response without CORS headers, which is presumably why it surfaced as a CORS error.
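In case it helps anyone else, here is a minimal sketch of the kind of timeout bump that fixes this, assuming Gunicorn is configured through a gunicorn.conf.py file and the Kong service is adjusted through its Admin API. The service name, Admin API address, and the 300-second / 300000-millisecond values are illustrative, not the exact values from this project.

# gunicorn.conf.py -- illustrative values; pick something above the worst-case
# time the CSV export needs for the largest dataset
timeout = 300           # seconds a worker may spend on a request before being killed (default is 30)
graceful_timeout = 300  # time a worker gets to finish up when asked to restart

# bump_kong_timeouts.py -- hypothetical service name and Admin API address
import requests

KONG_ADMIN = "http://localhost:8001"   # Kong Admin API (assumption)
SERVICE = "csv-export-service"         # hypothetical name of the upstream service

# Kong service timeouts are in milliseconds; 300000 ms = 5 minutes (illustrative)
resp = requests.patch(
    f"{KONG_ADMIN}/services/{SERVICE}",
    json={
        "connect_timeout": 300000,
        "read_timeout": 300000,
        "write_timeout": 300000,
    },
)
resp.raise_for_status()
print(resp.json())

With both the Gunicorn worker timeout and Kong's proxy timeouts set above the worst-case export time, the worker is allowed to finish rendering the CSV and the gateway keeps waiting for it instead of killing the request upstream.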