Is it possible (without an application-layer cache of requests) to prevent sending multiple HTTP requests for the same resource when it is cacheable? And if yes, how?
E.g. instead of
at time 0: GET /data (request#1)
at time 1: GET /data (request#2)
at time 2: received response#1 for request#1 // headers indicate that the response can be cached
at time 3: received response#2 for request#2 // headers indicate that the response can be cached
this should happen:
at time 0: GET /data (request#1)
at time 1: GET /data (will wait for the response of request#1)
at time 2: received response#1 for request#1 // headers indicate that the response can be cached
at time 3: returns response#1 for request#2
This would require that it's possible to indicate to the browser that the response will be cacheable before the response headers are read. I am asking whether such a mechanism exists, e.g. a preceding OPTIONS or HEAD request of some kind.
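What such a probe might look like if built by hand (a sketch only; the isLikelyCacheable helper is made up for illustration, the check itself still lives in the application, and /data is the example path from above):
// Hypothetical: probe cacheability with a HEAD request before deciding
// whether to coalesce duplicate GETs at the application layer.
async function isLikelyCacheable(url) {
  const head = await fetch(url, { method: 'HEAD' });
  const cacheControl = head.headers.get('Cache-Control') || '';
  // Treat anything explicitly marked no-store/no-cache as not cacheable.
  return !/no-store|no-cache/i.test(cacheControl);
}
isLikelyCacheable('/data').then(cacheable =>
  console.log(cacheable ? 'coalesce duplicate GETs' : 'send each GET'));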
Depending on the browser, the second request may be stalled and then served from cache once the first response arrives, if it turns out to be cacheable. Chromium does this for non-range requests:
"The cache implements a single writer - multiple reader lock so that only one network request for the same resource is in flight at any given time." (https://www.chromium.org/developers/design-documents/network-stack/http-cache)
Here is an example where three requests for the same resource result in only a single server call:
// Two parallel requests plus a third one after 2 s; in Chromium only the
// first actually hits the server, the others reuse the cached entry.
fetch('/data.json').then(async r => console.log(await r.json()));
fetch('/data.json').then(async r => console.log(await r.json()));
setTimeout(() => fetch('/data.json').then(async r => console.log(await r.json())), 2000);
The subsequent requests show 0 B transferred and return the same random number, showing that only a single server call was made.
This behavior is not the same in other browsers, e.g. Firefox.
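If the guarantee is needed across browsers, the portable fallback appears to be exactly the kind of application-layer deduplication the question tries to avoid; a minimal sketch (the fetchDedup name and the Map-based bookkeeping are made up for illustration):
// Share one in-flight request per URL at the application layer.
const inflight = new Map();
function fetchDedup(url) {
  if (!inflight.has(url)) {
    const p = fetch(url)
      .then(r => r.json())
      // Forget the entry once settled, so later calls go back to the
      // normal fetch / browser-cache path.
      .finally(() => inflight.delete(url));
    inflight.set(url, p);
  }
  return inflight.get(url);
}
fetchDedup('/data.json').then(data => console.log(data));
fetchDedup('/data.json').then(data => console.log(data)); // reuses the in-flight request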
An interesting follow-up question is what happens when a request for a resource is made while an HTTP/2 push for that resource was initiated earlier but has not yet finished.
The test code for reproducing this is here: https://gist.github.com/nickrussler/cd74ac1c07884938b205556030414d34