varnish storage tuning

Darryl Dixon - Winterhouse Consulting darryl.dixon at winterhouseconsulting.com
Thu Jun 11 23:46:59 CEST 2009


Hi Mike,

Quite possibly the purge_url usage is causing you a problem. I assume this
is something being invoked from your VCL, rather than via telnet to the
administrative interface or via varnishadm?

My testing showed that with purge_url in the VCL, a 'purge record' was
created every time the rule was triggered, and that record never seemed to
be removed, which meant that memory grew nearly continuously and without
bound (new memory was allocated for each new purge record). See the thread
I started here:
http://www.mail-archive.com/varnish-misc@projects.linpro.no/msg02520.html

Instead, in vcl_hit, if the object should be purged, set obj.ttl to 0 and
then restart the request. This solved the problem for me.
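
In VCL that approach looks roughly like this (a minimal sketch in
Varnish 2.0-era syntax; the PURGE request-method check is my assumption,
so adapt the condition to however you mark requests that should
invalidate an object):

    sub vcl_hit {
        if (req.request == "PURGE") {
            # Expire the cached object immediately...
            set obj.ttl = 0s;
            # ...then restart, so the request misses and fetches fresh.
            restart;
        }
    }

This lets the object go away through the normal expiry path instead of
accumulating purge records.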

regards,
Darryl Dixon
Winterhouse Consulting Ltd
http://www.winterhouseconsulting.com




> We're using Varnish and finding that Linux runs the OOM killer on the
> large varnish child process every few days.  I'm not sure what's causing
> the memory to grow but now I want to tune it so that I know configuration
> is not an issue.
> The default config we were using was 10MB.  We're using a small 32-bit EC2
> instance (Linux 2.6.21.7-2.fc8xen) with 1.75G of RAM and 10GB of disk, so
> I changed the storage specification to
> "file,/var/lib/varnish/varnish_storage.bin,1500M".  I'd like to be able to
> give varnish 8GB of disk, but it complains about sizes larger than 2GB.
> A 32-bit limitation?
>
> Side note: I couldn't find any good doc on the various command line
> parameters for varnishd.  The 2.0.4 source only contains a man page for
> vcl.  It would be nice to see a man page for varnishd and its options.
>
> We are using purge_url heavily as we update documents - this shouldn't
> cause unchecked growth though, right?  We aren't using regexps to purge.
>
>
>
> Attached is the /var/log/messages output from the oom-killer, and here
> are a few lines for the lazy.  I can't grok the output.
>
> Jun 11 15:35:02 (none) kernel: varnishd invoked oom-killer:
> gfp_mask=0x201d2, order=0, oomkilladj=0
> [...snip...]
> Jun 11 15:35:02 (none) kernel: Mem-info:
> Jun 11 15:35:02 (none) kernel: DMA per-cpu:
> Jun 11 15:35:02 (none) kernel: CPU    0: Hot: hi:  186, btch:  31 usd:  94
> Cold: hi:   62, btch:  15 usd:  60
> Jun 11 15:35:02 (none) kernel: HighMem per-cpu:
> Jun 11 15:35:02 (none) kernel: CPU    0: Hot: hi:  186, btch:  31 usd:  26
> Cold: hi:   62, btch:  15 usd:  14
> Jun 11 15:35:02 (none) kernel: Active:213349 inactive:210447 dirty:0
> writeback:0 unstable:0
> Jun 11 15:35:02 (none) kernel:  free:1957 slab:1078 mapped:23
> pagetables:1493 bounce:13
> Jun 11 15:35:02 (none) kernel: DMA free:7324kB min:3440kB low:4300kB
> high:5160kB active:355572kB inactive:346580kB present:739644kB
> pages_scanned:1108980 all_unreclaimable? yes
>
> Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 972
>
> Jun 11 15:35:02 (none) kernel: HighMem free:504kB min:512kB low:1668kB
> high:2824kB active:497824kB inactive:495208kB present:995688kB
> pages_scanned:1537436 all_unreclaimable? yes
> Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 0
>
> Jun 11 15:35:02 (none) kernel: DMA: 11*4kB 10*8kB 42*16kB 12*32kB 2*64kB
> 23*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 7324kB
> Jun 11 15:35:02 (none) kernel: HighMem: 1*4kB 6*8kB 4*16kB 0*32kB 0*64kB
> 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB
> Jun 11 15:35:02 (none) kernel: Swap cache: add 1563900, delete 1563890,
> find
> 572160/581746, race 3+9
> Jun 11 15:35:02 (none) kernel: Free swap  = 0kB
>
> Jun 11 15:35:02 (none) kernel: Total swap = 917496kB
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>



