Understanding timeouts in Varnish Cache

Varnish Software
Oct 23, 2018 · 4 min read


First of all, these timeouts are Varnish Cache parameters. This means that they are global to all requests and backend connections, and are set either at startup or at runtime via `varnishadm`.

If, for example, you want to increase timeout_req to 5 seconds, you can either add `-p timeout_req=5` to the startup arguments in `/etc/sysconfig/varnish`, or run `varnishadm param.set timeout_req 5` (the latter is not persisted across restarts).
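Both approaches can be sketched like this; the sysconfig path and the `DAEMON_OPTS` variable name vary between distributions and packaging, so treat them as assumptions:

```shell
# Persistent: add the parameter to the varnishd startup arguments,
# e.g. in /etc/sysconfig/varnish (variable name varies by distro):
DAEMON_OPTS="... -p timeout_req=5"

# Non-persistent: change it on a running instance via the CLI:
varnishadm param.set timeout_req 5

# Inspect the current value and its built-in documentation:
varnishadm param.show timeout_req
```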

Client-side timeouts
The client-side timeouts handle how long Varnish Cache will wait for a slow or abusive client. After a timeout fires, the client connection is normally simply closed.
In the initial stages, we have:
In the initial stages, we have:

timeout_idle (default: 5 seconds): How long we will wait, from when we `accept()` the connection, until the client must have sent us a non-whitespace character (typically the G in “GET /url HTTP/1.1”).

After the initial phase of request handling has passed, two different timers start to take effect:

send_timeout (default: 600 seconds) limits the maximum time we will spend delivering a single response. If you serve large downloads, or video files that are watched in real time, you will need to increase this to cover the duration of the longest file.

idle_send_timeout (default: 60 seconds) limits how long we allow the client to stall delivery by refusing to accept more data (signalled through a full TCP receive window).
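For a site hosting long downloads, a sketch of raising the delivery timeout at runtime (the one-hour value is an example; pick one that covers your longest file):

```shell
# Allow up to one hour of total delivery time per response (example value):
varnishadm param.set send_timeout 3600
```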

Backend timeouts

The backend timeouts decide how long Varnish Cache will wait for backend operations to complete.

connect_timeout (default: 3.5s) is how long we wait for a TCP connection to the backend to come up.

first_byte_timeout (default: 60s) limits how long the backend may spend processing the request. The first byte of the response must arrive on the TCP connection within this timeout.

between_bytes_timeout (default: 60s) limits how long we will wait between two subsequent successful reads on the backend connection.

When any of these go off, Varnish Cache will log a FetchError line to the shared memory log. If you have multiple backends and have configured retry/restart, a different backend can be asked. Otherwise this is a hard error, and the client will (usually) get a 503 Service Unavailable (“Guru Meditation”) response.
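A minimal VCL sketch of such a retry, assuming your backend selection logic (e.g. a director) can pick a different backend on the next attempt:

```vcl
sub vcl_backend_error {
    # A fetch timeout (connect_timeout, first_byte_timeout or
    # between_bytes_timeout) lands here; retry the fetch once
    # before giving up and sending the 503 to the client.
    if (bereq.retries < 1) {
        return (retry);
    }
}
```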
Traditionally these timeouts have been set to lower values, which has led multiple sources on the Internet to suggest setting all three to 600 seconds to “solve problems.” This is a bad idea, because log entries written by the worker threads are only flushed back into the main shmlog when the thread finishes its work. In essence you are blinding yourself: you will not see a stuck request in varnishlog until 10 minutes have passed.
The backend timeouts can also be set per-backend in VCL:

backend appserver1 {
    .host = "";
    .port = "80";
    .between_bytes_timeout = 30s;
}
Piped connections
In addition to the client and backend timeouts already mentioned, there is also a special timeout for piped connections. Pipe mode tells Varnish Cache to simply copy bytes between the client connection and the backend connection. This supports HTTP extensions and protocols that Varnish Cache doesn’t know about, or traffic that runs over HTTP but doesn’t really need or fit with caching.
Websockets is a common example of HTTP traffic that needs to be piped through Varnish Cache.
pipe_timeout (default: 60 seconds) is the pipe-mode equivalent of idle_send_timeout. If there is no activity on the connection for this long, it is shut down so the resources can be used elsewhere.
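WebSocket traffic can be piped with a VCL sketch along these lines; note that in pipe mode the `Upgrade` handshake headers must be copied onto the backend request by hand:

```vcl
sub vcl_recv {
    # Hand WebSocket upgrade requests straight to the backend in pipe mode.
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}

sub vcl_pipe {
    # Headers are not forwarded automatically for piped requests;
    # copy the upgrade handshake headers to the backend request.
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
}
```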
Other unrelated parameters
Some other parameters that look like they do something similar:

tcp_keepalive_time is there to make sure we expire internal state after a client abandons a waitinglisted request.

timeout_linger says how long a worker thread should keep the connection after a request has been processed, in case another shows up.

thread_pool_timeout says how long a worker thread should be kept running even if it isn’t necessary any more.

Adjusting these is almost never necessary, so don’t worry about them.

Where does HTTP keep-alive fit in all this?
HTTP keep-alive allows the client to indicate to the server that it wants the connection kept open, so it can be reused for another request soon.
This is a good idea: it avoids an unnecessary 1.5×RTT connection setup delay and TCP slow start for the next response. In HTTP/1.1 this is the default behaviour.
Varnish Cache does not have any explicit timeout handling for HTTP keep-alive, except that it closes the connection after processing if the “Connection: close” request header is set.
Otherwise, timeout_idle starts ticking again once request handling is done, and this is what implements the keep-alive timeout in Varnish Cache.

If you’re still on your learning curve with Varnish Cache, have a look at the webinar “10 Varnish Cache mistakes and how to avoid them,” which might come in handy.

