I've been timing some HTTP requests from the CLI using time and tools such as wget and curl, as follows:
/usr/bin/time -v wget --spider http://localhost/index
/usr/bin/time -v curl http://localhost/index 2>&1 > /dev/null
What I noticed is that when using curl, I was getting response times similar to wget's only on the first request, and much lower times on subsequent requests, as if the responses to curl were served from a cache and the responses to wget were not.
After investigating, I found out that when --spider is specified, wget issues a HEAD request, as appended below, which could explain why the cache is bypassed with wget:
Request
HEAD /index HTTP/1.0
User-Agent: Wget/1.12 (linux-gnu)
Accept: */*
Host: localhost
Connection: Keep-Alive
Response
HTTP/1.1 200 OK
Date: Mon, 28 Nov 2011 14:45:59 GMT
Server: Apache/2.2.14 (Ubuntu)
Content-Location: index.php
Vary: negotiate,Accept-Encoding
TCN: choice
X-Powered-By: PHP/5.3.2-1ubuntu4.10
Set-Cookie: SESS421aa90e079fa326b6494f812ad13e79=16oqmug3loekjlb1tlvmsrtcr2; expires=Wed, 21-Dec-2011 18:19:19 GMT; path=/
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Mon, 28 Nov 2011 14:45:59 GMT
Cache-Control: store, no-cache, must-revalidate
Cache-Control: post-check=0, pre-check=0
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8
Since I'm doing more advanced stuff (writing the body and headers to separate files, posting data, saving cookies in a jar...) I need to use curl instead of wget. Therefore I'm trying to emulate a HEAD request with curl.
I managed to send a HEAD request with curl as follows:
curl "http://localhost/index" --request "HEAD" -H "Connection: Keep-Alive" -0
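For reference, the same command in long-option form is sketched below; note that --request only swaps the method string sent on the wire (curl's handling of the response is unchanged), and --http1.0 is the long form of -0, matching the HTTP/1.0 that wget --spider speaks:

```shell
# Same command with long options, annotated:
#   --request HEAD   send HEAD instead of GET (only changes the method string)
#   --header ...     add the Keep-Alive header, matching wget's request
#   --http1.0        long form of -0: speak HTTP/1.0, as wget --spider does
curl "http://localhost/index" --request "HEAD" \
     --header "Connection: Keep-Alive" --http1.0
```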
Request
HEAD /index HTTP/1.0
User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
Host: localhost
Accept: */*
Connection: Keep-Alive
Response
HTTP/1.1 200 OK
Date: Mon, 28 Nov 2011 15:44:02 GMT
Server: Apache/2.2.14 (Ubuntu)
Content-Location: index.php
Vary: negotiate,Accept-Encoding
TCN: choice
X-Powered-By: PHP/5.3.2-1ubuntu4.10
Set-Cookie: SESS421aa90e079fa326b6494f812ad13e79=4001hcmhdbnkb9e2v8nok9lii1; expires=Wed, 21-Dec-2011 19:17:22 GMT; path=/
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Mon, 28 Nov 2011 15:44:02 GMT
Cache-Control: store, no-cache, must-revalidate
Cache-Control: post-check=0, pre-check=0
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8
Despite the request/response being seemingly OK, when I execute the above curl command while sniffing with tcpdump, I can see that the server responds straight away; however, my curl command always hangs for exactly 15 seconds, which is obviously a big issue since I'm trying to time the curl command. (FYI, I previously got curl: (18) transfer closed with 3 bytes remaining to read when the server was not handling HEAD properly and returned Content-Length: 3 without any content, but now everything looks OK.)
I tried playing with the --max-time and --speed-time arguments to make curl time out immediately upon receiving the 200 OK, but it makes no difference.
Q: How can I send a HEAD request with curl in such a way that the command stops immediately upon receiving the response from the server?
Why don't you just use the -I option?
-I/--head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature
the command HEAD which this uses to get nothing but the header
of a document. When used on a FTP or FILE file, curl displays
the file size and last modification time only
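With -I, curl both issues a real HEAD request and knows not to expect a body, so it returns as soon as the headers arrive rather than waiting for the server's keep-alive timeout. A minimal, self-contained sketch (assuming python3 and curl are available; port 8099 is arbitrary):

```shell
# Start a throwaway HTTP server to test against (port 8099 is arbitrary).
python3 -m http.server 8099 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# -I/--head sends a HEAD request; -s hides the progress meter.
# curl exits as soon as the headers have been received, so the
# timing reflects the server's response time, not a keep-alive timeout.
/usr/bin/time -v curl -sI http://localhost:8099/ > /dev/null

kill "$SERVER_PID"
```

Applied to the setup in the question, that would be /usr/bin/time -v curl -sI http://localhost/index instead of the --request "HEAD" variant.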