calling for a new workspace sizing concept - Re: auto-tune ws / headroom poc

Nils Goroll slink at schokola.de
Tue Jan 30 11:17:36 UTC 2018


Hi,

we've run into a workspace overflow in VFP_Push() and basically wasted a couple
of hours on exactly the same issue that I had raised almost two years ago:

workspace_backend is far from being the amount of space actually available to
VCL, and it is anything but obvious to users that it also needs to accommodate
the following (size estimates are for default parameters on 64bit):

	- bereq0, bereq, beresp:

	  3 * HTTP_estimate(cache_param->http_max_hdr)
	  = 3 * (sizeof (struct http)
	    + sizeof(txt) * cache_param->http_max_hdr
	    + cache_param->http_max_hdr)

	  ~ 3 * (56 + 16 * 64 + 64) ~ 3.4k

	- cache_param->vsl_buffer  4k

	- cache_param->http_resp_size 32k

So, with the default parameters, ~40k of our 64k are already reserved for
purposes other than VCL, dynamic backend request allocations and VFPs.
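
To make these numbers concrete, here is a quick back-of-the-envelope
calculation - a sketch only, assuming sizeof (struct http) == 56 and
sizeof (txt) == 16 on 64bit, and ignoring any alignment rounding the real
HTTP_estimate() does:

#include <stdio.h>

/* rough stand-in for HTTP_estimate(): struct http plus one txt and one
 * char per header slot */
static unsigned
http_estimate(unsigned http_max_hdr)
{
	return (56 + 16 * http_max_hdr + http_max_hdr);
}

int
main(void)
{
	unsigned http_max_hdr = 64;		/* default */
	unsigned vsl_buffer = 4 * 1024;		/* default 4k */
	unsigned http_resp_size = 32 * 1024;	/* default 32k */

	/* bereq0, bereq and beresp each need one http estimate */
	unsigned overhead = 3 * http_estimate(http_max_hdr)
	    + vsl_buffer + http_resp_size;

	printf("fixed overhead: %u bytes out of the 64k default\n", overhead);
	return (0);
}

This prints 40296 bytes - the ~40k mentioned above. Note that bumping
http_resp_size alone to 64k already pushes the fixed overhead past the 64k
default workspace_backend before VCL has allocated a single byte.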

If users increase any of the three parameters http_max_hdr, vsl_buffer or
http_resp_size without also increasing workspace_backend, they are very likely
to run into out-of-workspace conditions and start getting backend 503s.

I'm tempted to docfix this, but even having to tell users about the
HTTP_estimate formula seems just plain wrong.

Similar problems exist on the client side.

IMHO, we really should replace the concept of tuning workspace sizes with
tuning the headroom that remains available to VCL and other dynamic
allocations (like VFPs).

As my previous attempt to tackle this (quoted below) did not attract much
attention, I would like to avoid wasting time on patches before we reach some
kind of agreement on this suggestion or any other sensible solution:

* forbid nested sizing parameters of the kind "if you tune knob a, you also
  need to tune knob b" -> knob b should be replaced by a knob b' which is
  independent of knob a, and any derived sizes should be calculated
  automatically.

* make workspace sizes read-only parameters which users can query to estimate
  their memory consumption

* replace the workspace tunables with xxx_headroom parameters tuning just the
  space that should remain available on the workspace after all known
  allocations have been deducted (see the sketch below)
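
To illustrate what that last point could look like, here is a rough sketch
(not a patch - the parameter name headroom_backend and the helper
ws_backend_size() are made up for illustration) of how workspace_backend
could be derived from the other parameters plus a headroom knob:

/*
 * Sketch only: derive the effective backend workspace size from the
 * known fixed consumers plus an admin-tunable headroom.
 */
static unsigned
ws_backend_size(const struct params *p)
{
	unsigned fixed;

	/* bereq0, bereq, beresp + log buffer + initial backend read */
	fixed = 3 * HTTP_estimate(p->http_max_hdr)
	    + p->vsl_buffer
	    + p->http_resp_size;

	/* headroom_backend is what VCL, VFPs etc. actually get to use */
	return (fixed + p->headroom_backend);
}

With something like this in place, an admin who raises http_resp_size gets a
correspondingly larger workspace automatically, and headroom_backend keeps
meaning "space left for VCL and VFPs" regardless of the other knobs.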


Thanks, Nils


On 29/02/16 13:49, Nils Goroll wrote:
> Hi,
> 
> following up on an attempt to discuss this on irc:
> 
> 
> The following discussion is, for the time being, about workspace_backend
> only, but the question is also relevant for the other workspaces:
> 
> Substantial portions of workspace_backend are not available for use in VCL, but
> are consumed by core code. Setting workspace_backend too low will trigger an
> immediate assertion failure in VBO_GetBusyObj().
> 
> IMHO, it doesn't make sense to handle these assertion failures more gracefully,
> because space for the http headers, the VSL buffer and the initial read from the
> backend is mandatory for varnish to issue backend requests at all.
> 
> Instead of trying to better handle nonsense workspace_backend sizes, I'd suggest
> making it impossible to tune it stupidly in the first place - and giving
> admins a tunable which is easier to understand.
> 
> The attached patch contains a PoC (this is _not_ a finished patch, see below)
> suggesting the following changes:
> 
> - make workspace_backend a protected (read-only) parameter
> 
> - add headroom_backend as a new admin-tunable parameter which roughly
>   corresponds to "workspace available to VCL"
> 
> - calculate the size of workspace_backend based on all other relevant parameters
> 
> Notes on the patch:
> 
> - I've kept the workspace parameter as protected so the mempool code can stay
>   as is (pointing to a single parameter for sizing)
> 
> - in VBO_Init() I've taken a lightweight approach to the consistency issue
>   arising from parameter changes, and I'd be curious to hear whether others
>   agree or think that we need to make this safer.
> 
> - what is the best way to cross the border between the cache and mgt code?
> 
>   Originally I would have preferred to have the sizing code in cache_busyobj.c,
>   but then we'd need to make mgt_param visible there - which I didn't like.
> 
> But what I have in the patch now doesn't look clean at all, and I am really
>   unsure at this point.
> 
> Thx, Nils
> 
> 
> 

