From gojomo at archive.org Tue Mar 1 01:09:04 2011 From: gojomo at archive.org (Gordon Mohr) Date: Mon, 28 Feb 2011 16:09:04 -0800 Subject: Practical VCL limits; giant URL->backend map Message-ID: <4D6C3920.6030708@archive.org> The quite-possibly-nutty idea has occurred to me of auto-generating a VCL that maps each of about 18 million artifacts (incoming URLs) to 1, 2, or 3 of what are effectively 621 backend locations. (The mapping is essentially arbitrary.) Essentially, it would be replacing a squid url_rewrite_program. Am I likely to hit any hard VCL implementation limits (in depth-of-conditional-nesting, overall size, VCL compilation overhead, etc.) if my VCL is ~100-200MB in size? Am I overlooking some other more simple way to have varnish consult an arbitrary mapping (something similar to a squid url_rewrite_program)? Thanks for any warnings/ideas. - Gordon From perbu at varnish-software.com Tue Mar 1 08:45:03 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 1 Mar 2011 08:45:03 +0100 Subject: Practical VCL limits; giant URL->backend map In-Reply-To: <4D6C3920.6030708@archive.org> References: <4D6C3920.6030708@archive.org> Message-ID: On Tue, Mar 1, 2011 at 1:09 AM, Gordon Mohr wrote: > The quite-possibly-nutty idea has occurred to me of auto-generating a VCL > that maps each of about 18 million artifacts (incoming URLs) to 1, 2, or 3 of > what are effectively 621 backend locations. (The mapping is essentially > arbitrary.) Wow. > Essentially, it would be replacing a squid url_rewrite_program. > > Am I likely to hit any hard VCL implementation limits (in > depth-of-conditional-nesting, overall size, VCL compilation overhead, etc.) > if my VCL is ~100-200MB in size? > > Am I overlooking some other more simple way to have varnish consult an > arbitrary mapping (something similar to a squid url_rewrite_program)? Yeah. Take a look at the DNS director. Just put your backends in a zone, point Varnish at it and Bob's your uncle. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From pom at dmsp.de Tue Mar 1 09:09:35 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Tue, 01 Mar 2011 09:09:35 +0100 Subject: Release date varnish 3.0? Message-ID: <4D6CA9BF.3060406@dmsp.de> Hi all, I am curious and maybe I missed something, but is there a planned release date for varnish 3.0? Have a nice Tuesday, Stefan -- *Dipl.-Inform. Stefan Pommerening Informatik-Büro: IT-Dienste & Projekte, Consulting & Coaching* http://www.dmsp.de From phk at phk.freebsd.dk Tue Mar 1 09:53:05 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 01 Mar 2011 08:53:05 +0000 Subject: Release date varnish 3.0? In-Reply-To: Your message of "Tue, 01 Mar 2011 09:09:35 +0100." <4D6CA9BF.3060406@dmsp.de> Message-ID: <12068.1298969585@critter.freebsd.dk> In message <4D6CA9BF.3060406 at dmsp.de>, Stefan Pommerening writes: >Hi all, > >I am curious and maybe I missed something, but is there a planned >release date for varnish 3.0? It depends a lot on how much we can get people to test the code first. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
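To make the DNS director suggestion earlier in this thread concrete, a Varnish 2.1-style declaration looks roughly like the sketch below. The director name, host header, networks, TTL and suffix are placeholders for illustration, not values from this thread; the point is that the URL-to-backend mapping would then live in a DNS zone rather than in a huge generated VCL.

    director artifacts dns {
        .list = {
            # Placeholder defaults applied to the backends this director creates.
            .host_header = "www.example.com";
            .port = "80";
            .connect_timeout = 0.4s;
            # Only addresses inside these (placeholder) networks are accepted as backends.
            "192.168.15.0"/24;
            "192.168.16.128"/25;
        }
        # How long a DNS answer is cached by the director.
        .ttl = 5m;
        # Appended to the name being resolved.
        .suffix = "internal.example.net";
    }

    sub vcl_recv {
        set req.backend = artifacts;
    }

As far as the 2.1 documentation describes it, the director resolves the relevant hostname (with .suffix appended) and uses the answer as the backend address when it falls inside the listed networks.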
From tfheen at varnish-software.com Tue Mar 1 10:36:47 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 01 Mar 2011 10:36:47 +0100 Subject: Practical VCL limits; giant URL->backend map In-Reply-To: <4D6C3920.6030708@archive.org> (Gordon Mohr's message of "Mon, 28 Feb 2011 16:09:04 -0800") References: <4D6C3920.6030708@archive.org> Message-ID: <87mxlfrxlc.fsf@qurzaw.varnish-software.com> ]] Gordon Mohr | Am I likely to hit any hard VCL implementation limits (in | depth-of-conditional-nesting, overall size, VCL compilation overhead, | etc.) if my VCL is ~100-200MB in size? It will probably take a while to compile. Somebody at VUG3 mentioned a 50MB VCL for similar reasons and it took a little bit to compile and he wanted some evil hacks to be able to distribute the compiled VCL due to that, but apart from that, I believe it worked well enough. Regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From jbooher at praxismicro.com Tue Mar 1 18:18:31 2011 From: jbooher at praxismicro.com (Jeff Booher) Date: Tue, 1 Mar 2011 12:18:31 -0500 Subject: Varnish Cache on multi account VPS Message-ID: Curious as to weather the varnish cache can be restricted to use on only one account in CPanel? Jeff Booher p 440.549.0049 | f: 440.549.5695 www.praxismicro.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssm at redpill-linpro.com Wed Mar 2 00:07:05 2011 From: ssm at redpill-linpro.com (Stig Sandbeck Mathisen) Date: Wed, 02 Mar 2011 00:07:05 +0100 Subject: Varnish Cache on multi account VPS In-Reply-To: (Jeff Booher's message of "Tue, 1 Mar 2011 12:18:31 -0500") References: Message-ID: <87wrkio2xy.fsf@mavis.fnord.no> Jeff Booher writes: > Curious as to weather the varnish cache can be restricted to use on > only one account in CPanel? I think you may want to supply a bit more context for your question. It is hard to figure out exactly what you mean. -- Stig Sandbeck Mathisen Redpill Linpro AS From lampe at hauke-lampe.de Wed Mar 2 00:13:47 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Wed, 02 Mar 2011 00:13:47 +0100 Subject: caching of restarted requests possible? Message-ID: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> Hi. I have a virtual host "images.example.com" served from two backends: - a backend "archive" which contains the bulk of images on fast read-only storage - a backend "updates" holding additions and updates A request for /foo.jpg should check the update backend first, even if the image was previously cached from the archive backend. A 404 status from the update backend restarts the request and fetches the image from the archive backend or delivers a cached copy. My code so far is at: http://cfg.openchaos.org/varnish/vcl/special/backend_select_updates.vcl It basically does what I want, but because the update backend's 404 is not stored when vcl_fetch returns restart, it sends a backend query for every request. I'd like to cache the 404 response for some time and immediately lookup the object under the next backend's hash key, so the update backend is only queried again after the TTL of the 404 object expires. I figure that even if varnish would cache the request before restart, it would probably not go through vcl_fetch next time. I tried setting a magic header in vcl_fetch and restart the request in vcl_deliver. varnish didn't like that and died with "INCOMPLETE AT: cnt_deliver(196)" For now, I use the original URL and Host: header as hash key and reduced the cache TTL. 
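A minimal sketch of the restart-based, two-backend selection described above — the backend names and hosts are placeholders, and the real configuration is the one at the URL given earlier:

    backend updates { .host = "updates.example.com"; .port = "80"; }
    backend archive { .host = "archive.example.com"; .port = "80"; }

    sub vcl_recv {
        # The first attempt goes to the updates backend; after a restart,
        # fall back to the read-only archive.
        if (req.restarts == 0) {
            set req.backend = updates;
        } else {
            set req.backend = archive;
        }
    }

    sub vcl_fetch {
        # A 404 from the updates backend triggers a retry against the archive.
        if (beresp.status == 404 && req.restarts == 0) {
            return (restart);
        }
    }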
That workaround works well enough, even though it produces more traffic on the backends than would be necessary. Is there any way to remember the previous request status on restart and use it for backend selection in vcl_recv? Hauke From l.barszcz at gadu-gadu.pl Wed Mar 2 08:13:30 2011 From: l.barszcz at gadu-gadu.pl (Łukasz Barszcz / Gadu-Gadu) Date: Wed, 02 Mar 2011 08:13:30 +0100 Subject: caching of restarted requests possible? In-Reply-To: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> Message-ID: <4D6DEE1A.80609@gadu-gadu.pl> Hi, On 02.03.2011 00:13, Hauke Lampe wrote: > I'd like to cache the 404 response for some time and immediately lookup the object under the next backend's hash key, so the update backend is only queried again after the TTL of the 404 object expires. > > I figure that even if varnish would cache the request before restart, it would probably not go through vcl_fetch next time. I tried setting a magic header in vcl_fetch and restart the request in vcl_deliver. varnish didn't like that and died with "INCOMPLETE AT: cnt_deliver(196)" Check out the patch attached to ticket http://varnish-cache.org/trac/ticket/412 which changes the behavior to what you need. > Is there any way to remember the previous request status on restart and use it for backend selection in vcl_recv? You can store data in a custom header on req, like req.http.X-My-State. req.* is accessible from every function in vcl, so you can store your state in there - it persists across restarts. -- Łukasz Barszcz web architect / Pion Aplikacji Internetowych GG Network S.A ul. Kamionkowska 45 03-812 Warszawa tel.: +48 22 514 64 99 fax.: +48 22 514 64 98 gg:16210 Company registered with the District Court for the capital city of Warsaw, 13th Commercial Division of the National Court Register, KRS number 0000264575, NIP 867-19-48-977. Share capital: PLN 1,758,461.10, paid in full. From thereallove at gmail.com Tue Mar 1 17:54:32 2011 From: thereallove at gmail.com (Dan Gherman) Date: Tue, 1 Mar 2011 11:54:32 -0500 Subject: Varnish, between Zeus and Apache Message-ID: Hello! I am confronted with this situation: I manage a Zeus load-balancer cluster that has Apache as a webserver on the nodes in the backend. When Zeus load-balances a connection to an Apache server or Apache-based application, the connection appears to originate from the Zeus machine. Zeus provides an Apache module to work around this. Zeus automatically inserts a special 'X-Cluster-Client-Ip' header into each request, which identifies the true source address of the request. Zeus' Apache module inspects this header and corrects Apache's calculation of the source address. This change is transparent to Apache, and any applications running on or behind Apache. Now the issue is when I have Varnish between Zeus and Apache. Varnish will always "see" the connections coming from the Zeus load-balancer. Is there a way to have a workaround, like that Apache module, so I can then send to Apache the true source address of the request? My error.log is flooded with the usual messages: "Ignoring X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine 'node_ip'" Thank you! --- Dan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From traian.bratucu at eea.europa.eu Wed Mar 2 09:09:58 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 2 Mar 2011 09:09:58 +0100 Subject: Varnish, between Zeus and Apache In-Reply-To: References: Message-ID: My guess is that Zeus may also set other headers that identify it to the apache module, and somehow get stripped by Varnish. You should check that out. Otherwise another solution may be placing Varnish in front of Zeus, if that does not affect your cluster setup. Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Dan Gherman Sent: Tuesday, March 01, 2011 5:55 PM To: varnish-misc at varnish-cache.org Subject: Varnish, between Zeus and Apache Hello! I am confronting with this situation: I manage a Zeus load-balancer cluster who has Apache as a webserver on the nodes in the backend. When Zeus load-balances a connection to an Apache server or Apache-based application, the connection appears to originate from the Zeus machine.Zeus provide an Apache module to work around this. Zeus automatically inserts a special 'X-Cluster-Client-Ip' header into each request, which identifies the true source address of the request. Zeus' Apache module inspects this header and corrects Apache's calculation of the source address. This change is transparent to Apache, and any applications running on or behind Apache. Now the issue is when I have Varnish between Zeus and Apache. Varnish will always "see" the connections coming from the Zeus load-balancer. Is there a way to have a workaround, like that Apache module, so I can then send to Apache the true source address of the request? My error.log is flooded with the usual messages " Ignoring X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine 'node_ip' Thank you! --- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From varnish at mm.quex.org Wed Mar 2 09:32:42 2011 From: varnish at mm.quex.org (Michael Alger) Date: Wed, 2 Mar 2011 16:32:42 +0800 Subject: Varnish, between Zeus and Apache In-Reply-To: References: Message-ID: <20110302083242.GA26131@grum.quex.org> On Tue, Mar 01, 2011 at 11:54:32AM -0500, Dan Gherman wrote: > I am confronting with this situation: I manage a Zeus load-balancer > cluster who has Apache as a webserver on the nodes in the backend. > When Zeus load-balances a connection to an Apache server or > Apache-based application, the connection appears to originate from the > Zeus machine.Zeus provide an Apache module to work around this. Zeus > automatically inserts a special 'X-Cluster-Client-Ip' header into each > request, which identifies the true source address of the request. > [...] > Is there a way to have a workaround, like that Apache module, so I can > then send to Apache the true source address of the request? My > error.log is flooded with the usual messages " Ignoring > X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine > 'node_ip' It sounds like Varnish is sending the headers it receives, but the Apache module only respects the X-Cluser-Client-IP header when it's received from a particular IP address(es). See if there's a way to configure the module to accept it from Varnish, i.e. as if Varnish is the load-balancer. There's probably some existing configuration which has the IP address of the Zeus load-balancer. 
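By default Varnish forwards the Zeus-supplied header to Apache untouched, so the module-side trust change Michael describes is usually what is needed. If reconfiguring the module is not possible, another option (suggested again later in this thread) is to copy the address into a header Apache already understands; a sketch, with the header names taken from this thread and everything else assumed:

    sub vcl_recv {
        # If Zeus supplied the real client address, also expose it as
        # X-Forwarded-For so Apache-side tools (mod_rpaf, log formats, etc.)
        # that look for that header can use it.
        if (req.http.X-Cluster-Client-Ip) {
            set req.http.X-Forwarded-For = req.http.X-Cluster-Client-Ip;
        }
    }

Whether Apache honours X-Forwarded-For still depends on its own configuration, just as trusting X-Cluster-Client-Ip from Varnish's address does.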
From lampe at hauke-lampe.de Wed Mar 2 12:39:08 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Wed, 02 Mar 2011 12:39:08 +0100 Subject: caching of restarted requests possible? In-Reply-To: <4D6DEE1A.80609@gadu-gadu.pl> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> <4D6DEE1A.80609@gadu-gadu.pl> Message-ID: <4D6E2C5C.9020704@hauke-lampe.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02.03.2011 08:13, Łukasz Barszcz / Gadu-Gadu wrote: > Check out the patch attached to ticket > http://varnish-cache.org/trac/ticket/412 which changes the behavior to what > you need. That looks promising, thanks. I'll give it a try. I hadn't thought of using vcl_hit to restart the request, either. That might solve the crash I encountered with restarting from within vcl_deliver. Hauke. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iEYEARECAAYFAk1uLFYACgkQKIgAG9lfHFMeEgCfTIfBp9FzLUjj7uPDrgkSfleo q9MAn2Efxy7kmRb3uMN560zjSsih2nob =mejt -----END PGP SIGNATURE----- From l at lrowe.co.uk Wed Mar 2 15:08:03 2011 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 2 Mar 2011 14:08:03 +0000 Subject: Practical VCL limits; giant URL->backend map In-Reply-To: <4D6C3920.6030708@archive.org> References: <4D6C3920.6030708@archive.org> Message-ID: On 1 March 2011 00:09, Gordon Mohr wrote: > The quite-possibly-nutty idea has occurred to me of auto-generating a VCL > that maps each of about 18 million artifacts (incoming URLs) to 1, 2, or 3 of > what are effectively 621 backend locations. (The mapping is essentially > arbitrary.) > > Essentially, it would be replacing a squid url_rewrite_program. > > Am I likely to hit any hard VCL implementation limits (in > depth-of-conditional-nesting, overall size, VCL compilation overhead, etc.) > if my VCL is ~100-200MB in size? > > Am I overlooking some other more simple way to have varnish consult an > arbitrary mapping (something similar to a squid url_rewrite_program)? > > Thanks for any warnings/ideas. With that many entries, I expect you'll find that configuration will be quite slow, as there are no index structures in VCL and it compiles down to simple procedural C code. I think you'd be better off taking the approach of integrating with an external database library for the lookup. This blog post shows how to search for values in an xml file http://www.enrise.com/2011/02/mobile-device-detection-with-wurfl-and-varnish/ but I expect you'll see better performance using sqlite or bdb. Laurence From romain.ledisez at netensia.fr Wed Mar 2 17:46:22 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Wed, 02 Mar 2011 17:46:22 +0100 Subject: Varnish burns the CPU and eat the RAM Message-ID: <1299084382.2658.200.camel@romain.v.netensia.net> Hello all, I'm pretty new to Varnish. I'm deploying it because one of our customers is going to have a special event and the website is pretty slow (I'm working for an Internet hosting company). We are expecting more than 1000 requests per second. From what I read here and there, this should not be a problem for Varnish. My problem is that when Varnish is using the cache ("deliver", as opposed to "pass"), the CPU consumption increases drastically, and so does the RAM usage. The server is a Xeon QuadCore 2.5GHz, 8GB of RAM. With a simple test like this (robots.txt = 300 bytes): ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt CPU consumption fluctuates between 120% and 160%. The second point is that Varnish consumes all the memory. 
Trying to limit that, I made a tmpfs mountpoint of 3G: mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ But varnish continues to consume all the memory. My configuration is attached to this mail. Varnish is launched like this: /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 120,120,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/mnt/varnish/varnish_storage.bin,100% -p thread_pools 4 I also tried to launch it with the parameter "-h classic". It is installed on an up-to-date CentOS 5, with the latest RPMs provided by the varnish repository. If I put a return (pass) in vcl_fetch, everything is fine (except the backend server, of course). It makes me think, with my limited knowledge of Varnish, that the problem is in the delivery from cache. Output of "varnishstat -1", when running ab, is attached. Thanks for your help. -- Romain LE DISEZ -------------- next part -------------- backend customer1 { .host = "customer1.hoster.net"; .port = "80"; } sub vcl_recv { # # URL normalization # # Normalize URLs sent by curl -X and LWP if( req.url ~ "^http://" ) { set req.url = regsub(req.url, "http://[^/]*", ""); } # Normalize the host (domain.tldx -> www.domain.tld) if( req.http.host == "customer1.com" || req.http.host ~ "^(www\.)?customer1\.net$" ) { set req.http.redir = "http://www.customer1.com" req.url; error 750 req.http.redir; } # # Site configuration # # Rules specific to Customer1 if( req.http.host == "www.customer1.com" ) { set req.backend = customer1; # Remove the Cookie header sent by the browser remove req.http.Cookie; # OK for now (revisit when the mobile version is ready) remove req.http.Accept; remove req.http.Accept-Language; remove req.http.Accept-Charset; remove req.http.User-Agent; } # # Generic rules that apply to all sites # # Grace period: keep serving content after it has expired from the cache # (for example while the backend request is redone or the backend is brought back up) set req.grace = 3600s; # Normalize the Accept-Encoding header if( req.http.Accept-Encoding ) { if( req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|mp4|flv)$" ) { # Do not compress files that are already compressed remove req.http.Accept-Encoding; } elsif( req.http.Accept-Encoding ~ "gzip" ) { set req.http.Accept-Encoding = "gzip"; } elsif( req.http.Accept-Encoding ~ "deflate" ) { set req.http.Accept-Encoding = "deflate"; } else { # Remove unknown algorithms remove req.http.Accept-Encoding; } } # Purge the URL from the cache if it ends with the purge parameter if( req.url ~ "~purge$" ) { # Strip the "~purge" suffix, then purge the URL set req.url = regsub(req.url, "(.*)~purge$", "\1"); purge_url( req.url ); } } # # Called after the backend response is received # sub vcl_fetch { # Remove the Set-Cookie header sent by the server remove beresp.http.Set-Cookie; } # # Appell? 
avant l'envoi d'un contenu du cache # sub vcl_deliver { remove resp.http.Via; remove resp.http.X-Varnish; remove resp.http.Server; remove resp.http.X-Powered-By; remove resp.http.P3P; } # # "Catching" des erreurs # sub vcl_error { if( obj.status == 750 ) { set obj.http.Location = obj.response; set obj.status = 301; return(deliver); } } -------------- next part -------------- client_conn 136529 117.80 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 136532 117.80 Client requests received cache_hit 136531 117.80 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 1 0.00 Cache misses backend_conn 1 0.00 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 1 0.00 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 7720 . N struct sess_mem n_sess 18446744073709551606 . N struct sess n_object 1 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 481 . N struct objectcore n_objecthead 481 . N struct objecthead n_smf 3 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 1 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 480 . N worker threads n_wrk_create 480 0.41 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 81 0.07 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 166 0.14 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 6 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 136429 117.71 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 136534 117.80 Total Sessions s_req 136534 117.80 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 1 0.00 Total fetch s_hdrbytes 30885129 26648.08 Total header bytes s_bodybytes 41097938 35459.83 Total body bytes sess_closed 136538 117.81 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 4233973 3653.13 SHM records shm_writes 547445 472.34 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 46002 39.69 SHM MTX contention shm_cycles 1 0.00 SHM cycles through buffer sm_nreq 2 0.00 allocator requests sm_nobj 2 . outstanding allocations sm_balloc 8192 . bytes allocated sm_bfree 2574852096 . bytes free sma_nreq 0 0.00 SMA allocator requests sma_nobj 0 . SMA outstanding allocations sma_nbytes 0 . SMA outstanding bytes sma_balloc 0 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 0 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 0 . SMS bytes allocated sms_bfree 0 . 
SMS bytes freed backend_req 1 0.00 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 136442 117.72 HCB Lookups without lock hcb_lock 1 0.00 HCB Lookups with lock hcb_insert 1 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 1159 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From rdelsalle at gmail.com Wed Mar 2 18:17:07 2011 From: rdelsalle at gmail.com (Roch Delsalle) Date: Wed, 2 Mar 2011 18:17:07 +0100 Subject: Varnish & Multibrowser Message-ID: Hi, I would like to know how Varnish would behave if a web page is different depending on the browser accessing it. (eg. if a div is hidden for Internet Explorer users) Will it cache it randomly or is will it be able to notice the difference ? Thanks, -- D-Ro.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From ask at develooper.com Wed Mar 2 18:26:05 2011 From: ask at develooper.com (=?iso-8859-1?Q?Ask_Bj=F8rn_Hansen?=) Date: Wed, 2 Mar 2011 09:26:05 -0800 Subject: Varnish & Multibrowser In-Reply-To: References: Message-ID: <2FC7B8E9-2D6C-4B0E-BF8F-E32A8840F68B@develooper.com> On Mar 2, 2011, at 9:17, Roch Delsalle wrote: > I would like to know how Varnish would behave if a web page is different depending on the browser accessing it. > (eg. if a div is hidden for Internet Explorer users) > Will it cache it randomly or is will it be able to notice the difference ? You have to add a token to the cache key based on "was this MSIE". (Or have the developers do it with CSS or JS instead ...) - ask From scaunter at topscms.com Wed Mar 2 19:55:38 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 2 Mar 2011 13:55:38 -0500 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299084382.2658.200.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> Hello: You do not have return(lookup); in recv, not sure why, but that seems odd. Try it with that and see what happens with the test. We (have to) assume this is a 64bit OS. -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Romain LE DISEZ Sent: March-02-11 11:46 AM To: varnish-misc at varnish-cache.org Subject: Varnish burns the CPU and eat the RAM Hello all, I'm pretty new to Varnish. I'm deploying it because one of our customer is going to have a special event and the website is pretty slow (I'm working for an Internet hosting company). We are expecting more than 1000 requests per seconds. 
From what I read here and there, this should not be a problem for Varnish. My problem is that when Varnish is using cache ("deliver", as opposed to "pass"), the CPU consumption increases drasticaly, also the RAM. The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. With a simple test like this (robots.txt = 300 bytes) : ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt CPU consumption is fluctuating between 120% and 160%. Second point is that Varnish consumes all the memory. Trying to limit that, I made a tmpfs mountpoint of 3G : mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ But varnish continues to consume all the memory My configuration is attached to this mail. Varnish is launched like this : /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 120,120,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/mnt/varnish/varnish_storage.bin,100% -p thread_pools 4 I also tried to launch it with parameter "-h classic" It is installed on a Centos 5 up to date, with lastest RPMs provided by the varnish repository. If I put a return (pass) in vcl_fetch, everything is fine (except the backend server, of course). It makes me think, with my little knowledges of Varnish, that the problem is in the delivering from cache. Output of "varnishstat -1", when running ab, is attached. Thanks for your help. -- Romain LE DISEZ From nkinkade at creativecommons.org Thu Mar 3 01:40:34 2011 From: nkinkade at creativecommons.org (Nathan Kinkade) Date: Wed, 2 Mar 2011 19:40:34 -0500 Subject: return(lookup) in vcl_recv() to cache requests with cookies? Message-ID: This seems like one of those perennial questions where the required reply is RTFM or "review the list archives because it's been asked thousands of times", but for whatever reason, I can't find an answer to this aspect of caching requests with cookies. In the examples section of the 2.1 VCL reference (we're running 2.1.5) there is an example for how to force Varnish to cache requests that have cookies: http://www.varnish-cache.org/docs/2.1/reference/vcl.html#examples The instruction is to to return(lookup) in vcl_recv. However, I have found that that doesn't work for me. The only way I can seem to get Varnish 2.1.5 to cache a request with a cookie is to remove the Cookie: header in vcl_recv. Other docs I found also seem to indicate that return(lookup) should work. For example: http://www.varnish-cache.org/trac/wiki/VCLExampleCacheCookies#Cachingbasedonfileextensions There are also loads of other examples on the 'net that indicate that return(lookup) in vcl_recv should work. I though maybe it was cache control headers returned by the backend causing it not to cache, but I tried stripping all those out and it still didn't cache. Am I just missing something here, or is the documentation not fully complete? I don't necessarily want to strip cookies. I just want to cache, or not, based on some regular expression matching the Cookie: header sent by the client. Thanks, Nathan From cosimo at streppone.it Thu Mar 3 02:22:52 2011 From: cosimo at streppone.it (Cosimo Streppone) Date: Thu, 03 Mar 2011 12:22:52 +1100 Subject: Varnish & Multibrowser In-Reply-To: References: Message-ID: On Thu, 03 Mar 2011 04:17:07 +1100, Roch Delsalle wrote: > Hi, > > I would like to know how Varnish would behave if a web page is different > depending on the browser accessing it. Varnish doesn't know that unless you instruct it to. > (eg. 
if a div is hidden for Internet Explorer users) > Will it cache it randomly or is will it be able to notice the difference It will cache regardless of the content of the page, but according to: 1) vcl_hash(), which defaults to URL + cookies I believe 2) HTTP Vary header of the backend response So basically you have to tell Varnish what you want, and possibly stay consistent between VCL and how your designers make different pages for different browsers. I tried to put together an example based on what we use: http://www.varnish-cache.org/trac/wiki/VCLExampleNormalizeUserAgent YMMV, -- Cosimo From jbooher at praxismicro.com Thu Mar 3 02:28:24 2011 From: jbooher at praxismicro.com (Jeff Booher) Date: Wed, 2 Mar 2011 20:28:24 -0500 Subject: Varnish Cache on multi account VPS Message-ID: I have 5 sites on the VPS. I want to only use Varnish on one. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nkinkade at creativecommons.org Thu Mar 3 02:59:10 2011 From: nkinkade at creativecommons.org (Nathan Kinkade) Date: Wed, 2 Mar 2011 20:59:10 -0500 Subject: Varnish Cache on multi account VPS In-Reply-To: References: Message-ID: This may not be the only, or even the best, way to go about this, but the thing that immediately occurs to me is to wrap your VCL rules for vcl_recv() in something like: sub vcl_recv { if ( req.http.host == "my.varnish.host" ) { [do something] } } Nathan On Wed, Mar 2, 2011 at 20:28, Jeff Booher wrote: > I have 5 sites on the VPS. I want to only use Varnish on one. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From andy at suburban-glory.com Thu Mar 3 09:28:24 2011 From: andy at suburban-glory.com (Andy Walpole) Date: Thu, 03 Mar 2011 08:28:24 +0000 Subject: 403 error message Message-ID: <4D6F5128.8020707@suburban-glory.com> Hi folks, I installed Varnish about a month ago but I've had a number of 403 error messages since (Service Unavailable Guru Meditation: XID: 583189221). It is only solved after a server reboot. I've no idea what is causing them. What is the best way of dissecting the problem? Is there an error file with Varnish? Regards, Andy -- ---------------------- Andy Walpole http://www.suburban-glory.com Work: 05601310400 (local rates) Mob: 07858756827 Skype: andy-walpole */Create and do what is new, through and through/* -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.ledisez at netensia.fr Thu Mar 3 10:13:45 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Thu, 03 Mar 2011 10:13:45 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> Message-ID: <1299143625.2628.41.camel@romain.v.netensia.net> Hello Stefan, thanks for your attention. Le mercredi 02 mars 2011 ? 13:55 -0500, Caunter, Stefan a ?crit : > You do not have > > return(lookup); > > in recv, not sure why, but that seems odd. Try it with that and see what happens with the test. As I understand, "return (lookup)" is automatically added because it is part of the default "vcl_recv", which is appended to the user vcl_recv. Nevertheless, I added it to the end of my "vcl_recv", it did not change the behaviour. > We (have to) assume this is a 64bit OS. 
You're right, it is a 64 bit CentOS. Greetings. -- Romain LE DISEZ -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From martin.boer at bizztravel.nl Thu Mar 3 11:01:36 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Thu, 03 Mar 2011 11:01:36 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299084382.2658.200.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> Message-ID: <4D6F6700.5020604@bizztravel.nl> Hello Romain, Wat does happen when you limit the amount of memory/space used ? Say something like -s file,/mnt/varnish/varnish_storage.bin,7G Regards, Martin On 03/02/2011 05:46 PM, Romain LE DISEZ wrote: > Hello all, > > I'm pretty new to Varnish. I'm deploying it because one of our customer > is going to have a special event and the website is pretty slow (I'm > working for an Internet hosting company). We are expecting more than > 1000 requests per seconds. > > From what I read here and there, this should not be a problem for > Varnish. > > My problem is that when Varnish is using cache ("deliver", as opposed to > "pass"), the CPU consumption increases drasticaly, also the RAM. > > The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. > > > With a simple test like this (robots.txt = 300 bytes) : > ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt > > CPU consumption is fluctuating between 120% and 160%. > > Second point is that Varnish consumes all the memory. Trying to limit > that, I made a tmpfs mountpoint of 3G : > mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ > > But varnish continues to consume all the memory > > My configuration is attached to this mail. Varnish is launched like > this : > /usr/sbin/varnishd -P /var/run/varnish.pid > -a :80 > -f /etc/varnish/default.vcl > -T 127.0.0.1:6082 > -t 120 > -w 120,120,120 > -u varnish -g varnish > -S /etc/varnish/secret > -s file,/mnt/varnish/varnish_storage.bin,100% > -p thread_pools 4 > > I also tried to launch it with parameter "-h classic" > > It is installed on a Centos 5 up to date, with lastest RPMs provided by > the varnish repository. > > If I put a return (pass) in vcl_fetch, everything is fine (except the > backend server, of course). It makes me think, with my little knowledges > of Varnish, that the problem is in the delivering from cache. > > Output of "varnishstat -1", when running ab, is attached. > > Thanks for your help. > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stewsnooze at gmail.com Thu Mar 3 14:10:40 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Thu, 3 Mar 2011 13:10:40 +0000 Subject: 403 error message In-Reply-To: <4D6F5128.8020707@suburban-glory.com> References: <4D6F5128.8020707@suburban-glory.com> Message-ID: <1870656644209048894@unknownmsgid> On 3 Mar 2011, at 08:29, Andy Walpole wrote: Hi folks, I installed Varnish about a month ago but I've had a number of 403 error messages since (Service Unavailable Guru Meditation: XID: 583189221). It is only solved after a server reboot. I've no idea what is causing them. What is the best way of dissecting the problem? Is there an error file with Varnish? 
Regards, Andy -- ---------------------- Andy Walpole http://www.suburban-glory.com Work: 05601310400 (local rates) Mob: 07858756827 Skype: andy-walpole *Create and do what is new, through and through* _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc 403 points at your backend telling varnish it is forbidden. If varnish is giving you that error it is working and the backend is giving it 403. I've seen this before if backend apps use some sort of rate limiting per ip as by default when you add varnish to an existing setup the ip that gets passed to the backend is the varnish ip not the source ip. You could try passing the ip as x-forwarded-for Stew -------------- next part -------------- An HTML attachment was scrubbed... URL: From camcima at hotmail.com Thu Mar 3 15:33:28 2011 From: camcima at hotmail.com (Carlos Cima) Date: Thu, 3 Mar 2011 11:33:28 -0300 Subject: Grace Message-ID: Hi, Is there any way to check if a particular request was answered "in grace" by sending a HTTP header? I'm trying to increase the grace period if the user-agent contains "Googlebot" in order to speed up crawling response time and consequently be better positioned in Google organic search results. When I access using Googlebot in the user-agent header I'm not sure if Varnish is waiting for a backend request or not. VCL excerpt: sub vcl_recv { ... # Set Grace if (req.http.user-agent ~ "Googlebot") { set req.grace = 12h; } else { set req.grace = 30m; } ... } sub vcl_fetch { ... # Set Grace set beresp.grace = 12h; ... } Thanks, Carlos Cima -------------- next part -------------- An HTML attachment was scrubbed... URL: From shib4u at gmail.com Thu Mar 3 17:23:41 2011 From: shib4u at gmail.com (Shibashish) Date: Thu, 3 Mar 2011 21:53:41 +0530 Subject: Cache dynamic URLs Message-ID: Hi All, My "varnishtop -b -i TxURL" shows... 
9.99 TxURL /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=1 9.99 TxURL /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 9.99 TxURL /xyz.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 6.00 TxURL /aabb.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 5.99 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.99 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.99 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=1 4.00 TxURL /podsaabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 4.00 TxURL /podsaabb.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 4.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=1 3.99 TxURL /xyz.php?id=2182&status=1 3.00 TxURL /aabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2182&status=1&bt1=1412&bt2=0&pl1=318&pl2=6667&stat=1 3.00 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=1 3.00 TxURL /xyz.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=1 3.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=1 3.00 TxURL /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 3.00 TxURL /xyz.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=1 2.00 TxURL /aabb.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=0&pl1=1053&pl2=360&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=1 2.00 TxURL /aabb.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=1 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=1 2.00 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=2 2.00 TxURL /xyz.php?id=2183&status=2&bt1=7852&bt2=7676&pl1=142&pl2=1053&stat=2 2.00 TxURL /xyz.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 How can i cache those dynamic pages in Varnish, say for 30 sec ? Thanks. -- Shib -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhalfmoon at milksnot.com Thu Mar 3 17:29:04 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Thu, 03 Mar 2011 17:29:04 +0100 Subject: backend_toolate count suddenly drops after upgrade from 2.1.3 too 2.1.5 In-Reply-To: <1870656644209048894@unknownmsgid> References: <4D6F5128.8020707@suburban-glory.com> <1870656644209048894@unknownmsgid> Message-ID: <4D6FC1D0.7020508@milksnot.com> Hiya, today I upgraded a few Varnish servers from v2.1.3 to v2.1.5. 
The machines are purring along nicely, but I did notice something curious in the server's statistics: the backend_toolate is down from a very wobbly average of 20p/s to a constant 0.7p/s. Also, the object & object head count are way down. n_lru_nuked is also down from an average of 10p/s to zero. Hitrate is unaffected and performance is slightly up (a few percent less cpuload on high-traffic moments). This is no temporary effect, because I've seen it on another machine in the same cluster, which I upgraded a week ago. I did a quick comparison between 2.1.3 and 2.1.5 of varnishadm's 'param.show' and also a quick scan of the sourcecode of 2.1.3 & 2.1.5, but couldn't find any parameter defaults that might have been changed between versions. It's not causing any issues here, other than a bit more performance. I'm just curious: Does anybody know what's going on here? Cheers, Johnny -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish-215-backendtoolate.jpg Type: image/jpeg Size: 25299 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish-215-headcount.jpg Type: image/jpeg Size: 19799 bytes Desc: not available URL: From isharov at yandex-team.ru Thu Mar 3 17:35:23 2011 From: isharov at yandex-team.ru (Iliya Sharov) Date: Thu, 03 Mar 2011 19:35:23 +0300 Subject: Cache dynamic URLs In-Reply-To: References: Message-ID: <4D6FC34B.5010209@yandex-team.ru> Hi. Maybe sub vcl_hash { set req.hash += req.url; return(hash); } sub vcl_fetch { if (req.url ~ "\.php") { set beresp.ttl = 30s; } } and -p lru_interval=30 in command-line run options? Wbr, Iliya Sharov 03.03.2011 19:23, Shibashish wrote: > Hi All, > > My "varnishtop -b -i TxURL" shows... > > 9.99 TxURL > /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=1 > 9.99 TxURL > /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 9.99 TxURL > /xyz.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 6.00 TxURL > /aabb.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 5.99 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.99 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.99 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=1 > 4.00 TxURL > /podsaabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /podsaabb.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=1 > 3.99 TxURL /xyz.php?id=2182&status=1 > 3.00 TxURL > /aabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2182&status=1&bt1=1412&bt2=0&pl1=318&pl2=6667&stat=1 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=1 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=1 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=1 > 3.00 
TxURL > /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=1 > 2.00 TxURL > /aabb.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=0&pl1=1053&pl2=360&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=1 > 2.00 TxURL > /aabb.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=1 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=1 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=2 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=7852&bt2=7676&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > > How can i cache those dynamic pages in Varnish, say for 30 sec ? > > Thanks. > > -- > Shib > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.ledisez at netensia.fr Thu Mar 3 17:57:11 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Thu, 03 Mar 2011 17:57:11 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <4D6F6700.5020604@bizztravel.nl> References: <1299084382.2658.200.camel@romain.v.netensia.net> <4D6F6700.5020604@bizztravel.nl> Message-ID: <1299171431.2628.47.camel@romain.v.netensia.net> Hello Martin, On Thursday, 03 March 2011 at 11:01 +0100, Martin Boer wrote: > Wat does happen when you limit the amount of memory/space used ? Say > something like > > -s file,/mnt/varnish/varnish_storage.bin,7G I did that: # free -m total used free Mem: 7964 156 7807 -/+ buffers/cache: 156 7807 # /etc/init.d/varnish start Starting varnish HTTP accelerator: [ OK ] # free -m total used free Mem: 7964 5044 2920 -/+ buffers/cache: 5044 2920 # ps uax | grep varnish /usr/sbin/varnishd [...] -s file,/mnt/varnish/varnish_storage.bin,1G -p thread_pools 4 Even with a limit of 1G, it consumes 5G of RAM. Could it be related to the number of threads? -- Romain LE DISEZ -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From thebog at gmail.com Thu Mar 3 18:11:10 2011 From: thebog at gmail.com (thebog) Date: Thu, 3 Mar 2011 18:11:10 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299171431.2628.47.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> <4D6F6700.5020604@bizztravel.nl> <1299171431.2628.47.camel@romain.v.netensia.net> Message-ID: The storage you are assigning is the storage for objects, not memory space. When it comes to how much memory Varnish uses, it's assigned by the OS. There is a big difference between how much it uses and how much it is assigned by the OS (normally). Use the top command and read the difference between what's actually used and how much is reserved. Read: http://www.varnish-cache.org/docs/2.1/faq/general.html for more info around that. 
In short, Varnish is using modern OS techniques to find the "right" balance, and therefore memory should not be an issue. The burning of CPU is not correct, but I don't have any good pointers there. Join the irc channel, and ask if someone there can help you out. YS Anders Berg On Thu, Mar 3, 2011 at 5:57 PM, Romain LE DISEZ wrote: > Hello Martin, > > On Thursday, 03 March 2011 at 11:01 +0100, Martin Boer wrote: >> Wat does happen when you limit the amount of memory/space used ? Say >> something like >> >> -s file,/mnt/varnish/varnish_storage.bin,7G > > I did that: > > # free -m > total used free > Mem: 7964 156 7807 > -/+ buffers/cache: 156 7807 > > # /etc/init.d/varnish start > Starting varnish HTTP accelerator: [ OK ] > > # free -m > total used free > Mem: 7964 5044 2920 > -/+ buffers/cache: 5044 2920 > > # ps uax | grep varnish > /usr/sbin/varnishd [...] -s file,/mnt/varnish/varnish_storage.bin,1G -p thread_pools 4 > > Even with a limit of 1G, it consumes 5G of RAM. Could it be related to > the number of threads? > > > -- > Romain LE DISEZ > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From ruben.ortiz at itnet.es Thu Mar 3 16:18:57 2011 From: ruben.ortiz at itnet.es (Rubén Ortiz) Date: Thu, 3 Mar 2011 16:18:57 +0100 Subject: Varnish Thread_Pool_Max. How to increase? Message-ID: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> Hello First of all, my Varnish version is varnishd (varnish-2.0.4) on Linux kernel 2.6.18-028stab070.14. I'm really new to Varnish. I want to configure it my way, tuning some parameters, but I don't know how. I have this setup: DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -u varnish -g varnish \ -w 100,2000 \ -s file,/var/lib/varnish/varnish_storage.bin,2G" Theoretically, with -w 100,2000 I'm telling Varnish to increase its default values for thread_pool_min and thread_pool_max, and yes, when I check stats with param.show I can see the changes. Previously, I have rebooted the varnish daemon. But how can I increase thread_pool_max? I was able to change it in the admin console, but when I reboot the service, the param goes back to its default config (2). Thanks people! Rubén Ortiz Systems Administrator Edificio Nova Gran Via Av. Gran Vía, 16-20, 2ª planta | 08902 L'Hospitalet de Llobregat (Barcelona) T 902 999 343 | F 902 999 341 www.grupoitnet.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: grupo-itnet.jpg Type: image/jpeg Size: 7808 bytes Desc: not available URL: From liulu2 at leadsec.com.cn Fri Mar 4 03:14:01 2011 From: liulu2 at leadsec.com.cn (刘露) Date: Fri, 4 Mar 2011 10:14:01 +0800 Subject: [bug]varnish-2.1.5 run fail in linux-2.4.20 Message-ID: <201103041014015137100@leadsec.com.cn> jemalloc_linux.c line:5171 pthread_atfork(_malloc_prefork, _malloc_postfork, _malloc_postfork) produces a deadlock. 2011-03-04 liulu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nadahalli at gmail.com Fri Mar 4 07:08:36 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 01:08:36 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Message-ID: Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- client_conn 12408 8.19 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 5549280 3662.89 Client requests received cache_hit 5543904 3659.34 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 5376 3.55 Cache misses backend_conn 5373 3.55 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 3 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 5373 3.55 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 798 . N struct sess_mem n_sess 548 . N struct sess n_object 5373 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 5806 . N struct objectcore n_objecthead 5806 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 800 . 
N worker threads n_wrk_create 800 0.53 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 0 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 74099 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5543132 3658.83 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 12407 8.19 Total Sessions s_req 5549280 3662.89 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 5373 3.55 Total fetch s_hdrbytes 1245394845 822042.80 Total header bytes s_bodybytes 13448598673 8876962.82 Total body bytes sess_closed 43 0.03 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 5549262 3662.88 Session Linger sess_herd 1431702 945.02 Session herd shm_records 162564756 107303.47 SHM records shm_writes 7031015 4640.93 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 138344 91.32 SHM MTX contention shm_cycles 60 0.04 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 10746 7.09 SMA allocator requests sma_nobj 10746 . SMA outstanding allocations sma_nbytes 17538168 . SMA outstanding bytes sma_balloc 17538168 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 3 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 1254 . SMS bytes allocated sms_bfree 1254 . SMS bytes freed backend_req 5373 3.55 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 5540904 3657.36 HCB Lookups without lock hcb_lock 5375 3.55 HCB Lookups with lock hcb_insert 5373 3.55 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 1515 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) From tfheen at varnish-software.com Fri Mar 4 08:33:56 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 04 Mar 2011 08:33:56 +0100 Subject: backend_toolate count suddenly drops after upgrade from 2.1.3 too 2.1.5 In-Reply-To: <4D6FC1D0.7020508@milksnot.com> (Johnny Halfmoon's message of "Thu, 03 Mar 2011 17:29:04 +0100") References: <4D6F5128.8020707@suburban-glory.com> <1870656644209048894@unknownmsgid> <4D6FC1D0.7020508@milksnot.com> Message-ID: <87d3m7e3vf.fsf@qurzaw.varnish-software.com> ]] Johnny Halfmoon | It's not causing any issues here, other that a bit more | performance. 
I'm just curious: Does anybody know what's going on here? It could be the automatic retry of requests when the backend closes the connection at us. See commits 19966c023f3bba30c32187a0c432c1711ac25201 and f7a5d684ef8fa5352f5fe6f5a28f6fe45f72deb1 regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From perbu at varnish-software.com Fri Mar 4 08:48:05 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 4 Mar 2011 08:48:05 +0100 Subject: Cache dynamic URLs In-Reply-To: <4D6FC34B.5010209@yandex-team.ru> References: <4D6FC34B.5010209@yandex-team.ru> Message-ID: On Thu, Mar 3, 2011 at 5:35 PM, Iliya Sharov wrote: > Hi. > May be > sub vcl_hash > { > set req.hash > +=req.url; > > > return(hash); > } > This part isn't necessary. > > sub vcl_fetch > { > if (req.url ~ "(php") { set beresp.ttl =30s;} > } > It will work. > and -p lru_interval=30 in command-line run options? > This is also not relevant. I wouldn't recommend screwing around with parameters unless it is called for and you're know what you're doing. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Fri Mar 4 08:50:37 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 4 Mar 2011 08:50:37 +0100 Subject: Varnish Thread_Pool_Max. How to increase? In-Reply-To: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> References: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> Message-ID: 2011/3/3 Rub?n Ortiz > But How can I increase thread_pool_max? I was able to change in admin > console, but when I reboot service, the param backs to its default config > (2) > Take a look at the init script. It will probably source /etc/sysconfig/varnish or /etc/default/varnish and you can set the startup parameters there. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Fri Mar 4 08:56:24 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 04 Mar 2011 08:56:24 +0100 Subject: [bug]varnish-2.1.5 run fail in linux-2.4.20 In-Reply-To: <201103041014015137100@leadsec.com.cn> (=?utf-8?B?IuWImA==?= =?utf-8?B?6ZyyIidz?= message of "Fri, 4 Mar 2011 10:14:01 +0800") References: <201103041014015137100@leadsec.com.cn> Message-ID: <878vwve2tz.fsf@qurzaw.varnish-software.com> ]] "??" Hi, | jemalloc_linux.c line:5171 pthread_atfork(_malloc_prefork, _malloc_postfork, _malloc_postfork) produce of deadlock. 2.4 is quite old, 2.4.20 was released in 2002 so you should upgrade to something newer. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From steve.webster at lovefilm.com Fri Mar 4 11:14:18 2011 From: steve.webster at lovefilm.com (Steve Webster) Date: Fri, 4 Mar 2011 10:14:18 +0000 Subject: Processing ESIs in parallel Message-ID: Hi, We've been looking at using Varnish 2.1.5 with ESIs to allow us to cache the bulk of our page content whilst still generating the user-specific sections dynamically. The sticking point for us is that some of these page sections cannot be cached. 
It seems, based on both observed behaviour and a quick look at the code for ESI_Deliver, that Varnish is processing and requesting content for the ESIs serially rather than in parallel.

I know there has been a lot of work on ESIs for Varnish 3, but as far as I can tell they are still processed serially. Are there any plans to switch to a parallel processing model? If not, might this be a worthy feature request for a future version of Varnish?

Cheers,

Steve

--
Steve Webster
Web Architect
LOVEFiLM

From ksorensen at nordija.com Fri Mar 4 11:56:13 2011
From: ksorensen at nordija.com (Kristian =?ISO-8859-1?Q?Gr=F8nfeldt_S=F8rensen?=)
Date: Fri, 04 Mar 2011 11:56:13 +0100
Subject: Processing ESIs in parallel
In-Reply-To:
References:
Message-ID: <1299236173.21671.17.camel@kriller.nordija.dk>

On Fri, 2011-03-04 at 10:14 +0000, Steve Webster wrote:
> Hi,
>
> We've been looking at using Varnish 2.1.5 with ESIs to allow us to
> cache the bulk of our page content whilst still generating the
> user-specific sections dynamically. The sticking point for us is that
> some of these page sections cannot be cached. It seems, based on both
> observed behaviour and a quick look at the code for ESI_Deliver, that
> Varnish is processing and requesting content for the ESIs serially
> rather than in parallel.

I would like to see that feature in varnish as well. In our case we are including up to several hundred objects from a single document, and due to the nature of our data, chances are that if the first included ESI-object is a miss, then most of the remaining ESI-objects will be misses, so it would be great to be able to request some of the objects in parallel to speed up delivery.

Regards

Kristian Sørensen

From phk at phk.freebsd.dk Fri Mar 4 12:07:24 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Fri, 04 Mar 2011 11:07:24 +0000
Subject: Processing ESIs in parallel
In-Reply-To: Your message of "Fri, 04 Mar 2011 10:14:18 GMT."
Message-ID: <61061.1299236844@critter.freebsd.dk>

In message , Steve Webster writes:

>I know there has been a lot of work on ESIs for Varnish 3, but as far as I
>can tell they are still processed serially. Are there any plans to switch to
>a parallel processing model? If not, might this be a worthy feature request
>for a future version of Varnish?

I wouldn't call them "plans", but it is on our wish-list.

It is not simple though, so don't hold your breath.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
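A note on working around the serial processing discussed above: since each <esi:include> is fetched one at a time in the 2.x series, the practical mitigation is to keep the fragments themselves cacheable, so most includes are answered from cache rather than by a serial backend fetch. A minimal 2.1-style sketch follows; the /fragment/ path prefix and the TTL value are assumptions for illustration only, not anything taken from this thread:

    sub vcl_fetch {
        # Give the ESI fragments their own, shorter lifetime than the
        # container pages, so the "personalised-ish" bits stay reasonably
        # fresh while still being served from cache most of the time.
        if (req.url ~ "^/fragment/") {
            set beresp.ttl = 30s;
        }

        # Only parse HTML objects for <esi:include> tags.
        if (beresp.http.Content-Type ~ "text/html") {
            esi;
        }
    }

This does not make the includes parallel; it only reduces how often the serial path has to wait on the backend.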
From pom at dmsp.de Fri Mar 4 12:34:45 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Fri, 04 Mar 2011 12:34:45 +0100 Subject: varnishreplay question Message-ID: <4D70CE55.20306@dmsp.de> Hi all, I try to use varnishreplay for the first time. What I did is the following: - run varnishlog without any parameters and produce some quite big logfile on a production varnish - copy the log file (from varnishlog) from production to a testing machine (running varnish of course) - fiddle around with vcl to direct traffic to some standby backends - call varnishreplay 'varnishreplay -D -a localhost:80 -r ' Unfortunately a varnishstat on this testing machine does not show me any activity and my only output on console is: 0x7f3d703b5700 thread 0x7f3d704d4710:1701999465 started 0x7f3d703b5700 thread 0x7f3d703b3710:540291889 started 0x7f3d703b5700 thread 0x7f3d703ab710:678913378 started 0x7f3d703b5700 thread 0x7f3d703a3710:540357940 started 0x7f3d703b5700 thread 0x7f3d7039b710:540161076 started 0x7f3d703b5700 thread 0x7f3d70393710:540292660 started 0x7f3d703b5700 thread 0x7f3d7038b710:540292149 started 0x7f3d703b5700 thread 0x7f3d70383710:1701014383 started 0x7f3d703b5700 thread 0x7f3d7037b710:540292919 started 0x7f3d703b5700 thread 0x7f3d70373710:540162097 started 0x7f3d703b5700 thread 0x7f3d7036b710:825110816 started 0x7f3d703b5700 thread 0x7f3d6f28e710:1852796537 started 0x7f3d703b5700 thread 0x7f3d6f286710:540162100 started 0x7f3d703b5700 thread 0x7f3d6f27e710:540095033 started 0x7f3d703b5700 thread 0x7f3d6f276710:540292147 started 0x7f3d703b5700 thread 0x7f3d6f26e710:540094774 started 0x7f3d703b5700 thread 0x7f3d6f266710:540487985 started 0x7f3d703b5700 thread 0x7f3d6f25e710:1107959840 started 0x7f3d703b5700 thread 0x7f3d6f256710:540423988 started 0x7f3d703b5700 thread 0x7f3d6f24e710:540357431 started 0x7f3d703b5700 thread 0x7f3d6f246710:540488244 started 0x7f3d703b5700 thread 0x7f3d6f23e710:540356662 started 0x7f3d703b5700 thread 0x7f3d6f236710:540488756 started 0x7f3d703b5700 thread 0x7f3d6f22e710:540160820 started 0x7f3d703b5700 thread 0x7f3d6f26e710 stopped 0x7f3d703b5700 thread 0x7f3d6f27e710 stopped 0x7f3d703b5700 thread 0x7f3d6f22e710 stopped 0x7f3d703b5700 thread 0x7f3d7039b710 stopped 0x7f3d703b5700 thread 0x7f3d70373710 stopped 0x7f3d703b5700 thread 0x7f3d6f286710 stopped 0x7f3d703b5700 thread 0x7f3d703b3710 stopped 0x7f3d703b5700 thread 0x7f3d6f276710 stopped 0x7f3d703b5700 thread 0x7f3d7038b710 stopped 0x7f3d703b5700 thread 0x7f3d70393710 stopped 0x7f3d703b5700 thread 0x7f3d7037b710 stopped 0x7f3d703b5700 thread 0x7f3d6f23e710 stopped 0x7f3d703b5700 thread 0x7f3d6f24e710 stopped 0x7f3d703b5700 thread 0x7f3d703a3710 stopped 0x7f3d703b5700 thread 0x7f3d6f256710 stopped 0x7f3d703b5700 thread 0x7f3d6f266710 stopped 0x7f3d703b5700 thread 0x7f3d6f246710 stopped 0x7f3d703b5700 thread 0x7f3d6f236710 stopped 0x7f3d703b5700 thread 0x7f3d703ab710 stopped 0x7f3d703b5700 thread 0x7f3d7036b710 stopped 0x7f3d703b5700 thread 0x7f3d6f25e710 stopped 0x7f3d703b5700 thread 0x7f3d70383710 stopped 0x7f3d703b5700 thread 0x7f3d704d4710 stopped 0x7f3d703b5700 thread 0x7f3d6f28e710 stopped Ehm, varnish on production machine is 2.1.3, on testing platform is 2.1.5. Well - I'm doing it wrong - I know... but, how to do it correctly? Any idea or hint? Thanks! Kind regards, Stefan -- *Dipl.-Inform. 
Stefan Pommerening
Informatik-Büro: IT-Dienste & Projekte, Consulting & Coaching*
http://www.dmsp.de

From steve.webster at lovefilm.com Fri Mar 4 13:39:25 2011
From: steve.webster at lovefilm.com (Steve Webster)
Date: Fri, 4 Mar 2011 12:39:25 +0000
Subject: Processing ESIs in parallel
In-Reply-To: <61061.1299236844@critter.freebsd.dk>
References: <61061.1299236844@critter.freebsd.dk>
Message-ID:

On 4 Mar 2011, at 11:07, Poul-Henning Kamp wrote:

> In message , Steve Webster writes:
>
>> I know there has been a lot of work on ESIs for Varnish 3, but as far as I
>> can tell they are still processed serially. Are there any plans to switch to
>> a parallel processing model? If not, might this be a worthy feature request
>> for a future version of Varnish?
>
> I wouldn't call them "plans", but it is on our wish-list.

This is good news.

> It is not simple though, so don't hold your breath.

Indeed. I had one of those "how hard could this be" moments and started trying to implement it myself, then realised I had opened a can of worms and decided to leave Varnish hacking to the experts.

I have a workaround for now - a custom Apache output filter that uses LWP::Parallel - so thankfully breath-holding isn't necessary.

Cheers,

Steve

--
Steve Webster
Web Architect
LOVEFiLM

From scaunter at topscms.com Fri Mar 4 15:43:42 2011
From: scaunter at topscms.com (Caunter, Stefan)
Date: Fri, 4 Mar 2011 09:43:42 -0500
Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse
In-Reply-To:
References:
Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net>

Check your ec2 network settings. OS and varnish settings look okay, your varnishstat shows that varnish is coasting along fine.

It's not threads. You have 800 available, according to the varnishstat; it's running with 800 threads, handling 12,000+ connections, and there is no thread creation failure. Therefore it does not need to add threads.

What does something like firebug show when you request during the load test? The delay may be anything from DNS to the ec2 network.
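A rough way to tell kernel-level connection drops from Varnish-level ones (a sketch; the counter names are the ones from the varnishstat output quoted earlier in this thread):

    netstat -s | grep -i listen
    varnishstat -1 | egrep 'client_drop|n_wrk_queue|n_wrk_drop|n_wrk_max'

If the listen-queue/SYN counters from netstat climb during the test while the Varnish thread counters stay at zero, the connections are being dropped in the kernel's accept backlog before Varnish ever sees them. In that case it may also be worth looking at varnishd's listen_depth parameter, since net.core.somaxconn only caps whatever backlog the listening process actually asks for.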
Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:09 AM To: varnish-misc at varnish-cache.org Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/0 05351.html (Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/ 005258.html (Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelcrocha at gmail.com Fri Mar 4 18:47:10 2011 From: rafaelcrocha at gmail.com (rafael) Date: Fri, 04 Mar 2011 14:47:10 -0300 Subject: ESI include does not work until I reload page Message-ID: <4D71259E.1090305@gmail.com> # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. backend backend_0 { .host = "127.0.0.1"; .port = "1010"; .connect_timeout = 0.4s; .first_byte_timeout = 300s; .between_bytes_timeout = 60s; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { set req.grace = 120s; set req.backend = backend_0; if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ pass; } if (req.http.If-None-Match) { pass; } if (req.url ~ "createObject") { pass; } remove req.http.Accept-Encoding; lookup; } sub vcl_pipe { # This is not necessary if you do not do any request rewriting. 
set req.http.connection = "close"; set bereq.http.connection = "close"; } sub vcl_hit { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged"; } if (!obj.cacheable) { pass; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache"; } } sub vcl_fetch { set obj.grace = 120s; if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") { pass; } if (req.http.Authorization && !obj.http.Cache-Control ~ "public") { pass; } if (obj.http.Content-Type ~ "text/html") { esi; } } sub vcl_hash { set req.hash += req.url; set req.hash += req.http.host; if (req.http.Accept-Encoding ~ "gzip") { set req.hash += "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.hash += "deflate"; } } From nadahalli at gmail.com Fri Mar 4 19:22:58 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 13:22:58 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Message-ID: On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan wrote: > > What does something like firebug show when you request during the load > test? The delay may be anything from DNS to the ec2 network. > The DNS requests are getting resolved super quick. I am unable to see any other network issues with EC2. I have a similar machine in the same data center running nginx which is doing similar loads, but with no caching requirement, and it's running fine. In my first post, I forgot to attach my VCL, which is a bit too minimal. Am I missing something obvious? ------ backend default0 { .host = "10.202.30.39"; .port = "8000"; } sub vcl_recv { unset req.http.Cookie; set req.grace = 3600s; set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } ------------------------- Could there be some kind of TCP packet pileup that I am missing? -T > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:09 AM > *To:* varnish-misc at varnish-cache.org > *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > Hi Everyone, > > I am seeing a situation similar to : > > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) > > I have httperf loading a varnish cache with never-expire content. While the > load is on, other browser/wget requests to the varnish server get delayed to > 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be > impacted. So, is it some kind of thread problem? > > In production, I see a similar situation with around 1000 req/second load. 
> > I am running varnishd with the following command line options (as per > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -t 31536000 > > I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. > > My network sysctl parameters are tuned according to: > http://varnish-cache.org/trac/wiki/Performance > fs.file-max = 360000 > net.ipv4.ip_local_port_range = 1024 65536 > net.core.rmem_max = 16777216 > net.core.wmem_max = 16777216 > net.ipv4.tcp_rmem = 4096 87380 16777216 > net.ipv4.tcp_wmem = 4096 65536 16777216 > net.ipv4.tcp_fin_timeout = 3 > net.core.netdev_max_backlog = 30000 > net.ipv4.tcp_no_metrics_save = 1 > net.core.somaxconn = 262144 > net.ipv4.tcp_syncookies = 0 > net.ipv4.tcp_max_orphans = 262144 > net.ipv4.tcp_max_syn_backlog = 262144 > net.ipv4.tcp_synack_retries = 2 > net.ipv4.tcp_syn_retries = 2 > > > Any help would be greatly appreciated > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Fri Mar 4 20:25:12 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Fri, 4 Mar 2011 14:25:12 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> There's no health check in the backend. Not sure what that does with a one hour grace. I set a short grace with if (req.backend.healthy) { set req.grace = 60s; } else { set req.grace = 4h; } You also don't appear to select a backend in recv. Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:23 PM To: varnish-misc at varnish-cache.org Subject: Re: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan wrote: What does something like firebug show when you request during the load test? The delay may be anything from DNS to the ec2 network. The DNS requests are getting resolved super quick. I am unable to see any other network issues with EC2. I have a similar machine in the same data center running nginx which is doing similar loads, but with no caching requirement, and it's running fine. In my first post, I forgot to attach my VCL, which is a bit too minimal. Am I missing something obvious? ------ backend default0 { .host = "10.202.30.39"; .port = "8000"; } sub vcl_recv { unset req.http.Cookie; set req.grace = 3600s; set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } ------------------------- Could there be some kind of TCP packet pileup that I am missing? 
-T Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:09 AM To: varnish-misc at varnish-cache.org Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/0 05351.html (Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/ 005258.html (Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Fri Mar 4 20:30:00 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 14:30:00 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: > There?s no health check in the backend. Not sure what that does with a one > hour grace. I set a short grace with > > > > if (req.backend.healthy) { > > set req.grace = 60s; > > } else { > > set req.grace = 4h; > > } > I am still to add health-checks, directors, etc. Will add them soon. But those make sense if the cache-primed performance is good. In my test, I am requesting URLs who I know are already in the cache. Varnishstat also shows that - there are no cache misses at all. > > > You also don?t appear to select a backend in recv. > The default backend seems to be getting picked up automatically. 
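For reference, the probe-plus-grace pattern Stefan suggests would look roughly like this in 2.1-style VCL; the probe URL and the timing values are illustrative assumptions, and the host and port are simply the ones from the minimal VCL above:

    backend default0 {
        .host = "10.202.30.39";
        .port = "8000";
        .probe = {
            .url = "/";        # any cheap URL the backend answers quickly
            .interval = 5s;
            .timeout = 1s;
            .window = 5;
            .threshold = 3;
        }
    }

    sub vcl_recv {
        if (req.backend.healthy) {
            set req.grace = 60s;
        } else {
            set req.grace = 4h;
        }
    }

    sub vcl_fetch {
        # Objects must carry at least this much grace for the 4h above to help.
        set beresp.grace = 4h;
    }

With a probe defined, req.backend.healthy becomes meaningful, and the long grace only applies while the backend is actually failing its checks.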
-T > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:23 PM > > *To:* varnish-misc at varnish-cache.org > *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan > wrote: > > > > What does something like firebug show when you request during the load > test? The delay may be anything from DNS to the ec2 network. > > > The DNS requests are getting resolved super quick. I am unable to see any > other network issues with EC2. I have a similar machine in the same data > center running nginx which is doing similar loads, but with no caching > requirement, and it's running fine. > > In my first post, I forgot to attach my VCL, which is a bit too minimal. Am > I missing something obvious? > > ------ > backend default0 { > .host = "10.202.30.39"; > .port = "8000"; > } > > sub vcl_recv { > unset req.http.Cookie; > set req.grace = 3600s; > set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); > } > > sub vcl_deliver { > if (obj.hits > 0) { > set resp.http.X-Cache = "HIT"; > } else { > set resp.http.X-Cache = "MISS"; > } > } > ------------------------- > > Could there be some kind of TCP packet pileup that I am missing? > > -T > > > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:09 AM > *To:* varnish-misc at varnish-cache.org > *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > Hi Everyone, > > I am seeing a situation similar to : > > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) > > I have httperf loading a varnish cache with never-expire content. While the > load is on, other browser/wget requests to the varnish server get delayed to > 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be > impacted. So, is it some kind of thread problem? > > In production, I see a similar situation with around 1000 req/second load. > > I am running varnishd with the following command line options (as per > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -t 31536000 > > I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. 
> > My network sysctl parameters are tuned according to: > http://varnish-cache.org/trac/wiki/Performance > fs.file-max = 360000 > net.ipv4.ip_local_port_range = 1024 65536 > net.core.rmem_max = 16777216 > net.core.wmem_max = 16777216 > net.ipv4.tcp_rmem = 4096 87380 16777216 > net.ipv4.tcp_wmem = 4096 65536 16777216 > net.ipv4.tcp_fin_timeout = 3 > net.core.netdev_max_backlog = 30000 > net.ipv4.tcp_no_metrics_save = 1 > net.core.somaxconn = 262144 > net.ipv4.tcp_syncookies = 0 > net.ipv4.tcp_max_orphans = 262144 > net.ipv4.tcp_max_syn_backlog = 262144 > net.ipv4.tcp_synack_retries = 2 > net.ipv4.tcp_syn_retries = 2 > > > Any help would be greatly appreciated > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Fri Mar 4 21:19:34 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 15:19:34 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: Under loaded conditions (3 machines doing httperf separately), I did a separate wget on the side, and am attaching the TCPDUMP of that request. As you can see, there is a delay in the middle where varnish didn't respond immediately. If thread/hit-rate conditions are optimal, this delay should be minimal I thought. Any help would be appreciated. -T On Fri, Mar 4, 2011 at 2:30 PM, Tejaswi Nadahalli wrote: > On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: > >> There?s no health check in the backend. Not sure what that does with a one >> hour grace. I set a short grace with >> >> >> >> if (req.backend.healthy) { >> >> set req.grace = 60s; >> >> } else { >> >> set req.grace = 4h; >> >> } >> > > I am still to add health-checks, directors, etc. Will add them soon. But > those make sense if the cache-primed performance is good. In my test, I am > requesting URLs who I know are already in the cache. Varnishstat also shows > that - there are no cache misses at all. > > >> >> >> You also don?t appear to select a backend in recv. >> > > The default backend seems to be getting picked up automatically. > > -T > > >> >> >> Stefan Caunter >> >> Operations >> >> Torstar Digital >> >> m: (416) 561-4871 >> >> >> >> >> >> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >> *Sent:* March-04-11 1:23 PM >> >> *To:* varnish-misc at varnish-cache.org >> *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed >> Reponse >> >> >> >> On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan >> wrote: >> >> >> >> What does something like firebug show when you request during the load >> test? The delay may be anything from DNS to the ec2 network. >> >> >> The DNS requests are getting resolved super quick. I am unable to see any >> other network issues with EC2. I have a similar machine in the same data >> center running nginx which is doing similar loads, but with no caching >> requirement, and it's running fine. >> >> In my first post, I forgot to attach my VCL, which is a bit too minimal. >> Am I missing something obvious? 
>> >> ------ >> backend default0 { >> .host = "10.202.30.39"; >> .port = "8000"; >> } >> >> sub vcl_recv { >> unset req.http.Cookie; >> set req.grace = 3600s; >> set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); >> } >> >> sub vcl_deliver { >> if (obj.hits > 0) { >> set resp.http.X-Cache = "HIT"; >> } else { >> set resp.http.X-Cache = "MISS"; >> } >> } >> ------------------------- >> >> Could there be some kind of TCP packet pileup that I am missing? >> >> -T >> >> >> >> >> Stefan Caunter >> >> Operations >> >> Torstar Digital >> >> m: (416) 561-4871 >> >> >> >> >> >> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >> *Sent:* March-04-11 1:09 AM >> *To:* varnish-misc at varnish-cache.org >> *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed >> Reponse >> >> >> >> Hi Everyone, >> >> I am seeing a situation similar to : >> >> >> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) >> >> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) >> >> I have httperf loading a varnish cache with never-expire content. While >> the load is on, other browser/wget requests to the varnish server get >> delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem >> to be impacted. So, is it some kind of thread problem? >> >> In production, I see a similar situation with around 1000 req/second load. >> >> >> I am running varnishd with the following command line options (as per >> http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): >> >> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >> 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p >> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >> session_linger=100 -p lru_interval=20 -t 31536000 >> >> I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. >> >> My network sysctl parameters are tuned according to: >> http://varnish-cache.org/trac/wiki/Performance >> fs.file-max = 360000 >> net.ipv4.ip_local_port_range = 1024 65536 >> net.core.rmem_max = 16777216 >> net.core.wmem_max = 16777216 >> net.ipv4.tcp_rmem = 4096 87380 16777216 >> net.ipv4.tcp_wmem = 4096 65536 16777216 >> net.ipv4.tcp_fin_timeout = 3 >> net.core.netdev_max_backlog = 30000 >> net.ipv4.tcp_no_metrics_save = 1 >> net.core.somaxconn = 262144 >> net.ipv4.tcp_syncookies = 0 >> net.ipv4.tcp_max_orphans = 262144 >> net.ipv4.tcp_max_syn_backlog = 262144 >> net.ipv4.tcp_synack_retries = 2 >> net.ipv4.tcp_syn_retries = 2 >> >> >> Any help would be greatly appreciated >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- 20:15:46.896200 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [S], seq 975218147, win 5840, options [mss 1460,sackOK,TS val 239507633 ecr 0,nop,wscale 6], length 0 20:15:46.896220 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [S.], seq 2642556500, ack 975218148, win 5792, options [mss 1460,sackOK,TS val 267323553 ecr 239507633,nop,wscale 9], length 0 20:15:46.932874 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 1, win 92, options [nop,nop,TS val 239507639 ecr 267323553], length 0 20:15:46.932900 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [P.], seq 1:341, ack 1, win 92, options [nop,nop,TS val 239507639 ecr 267323553], length 340 20:15:46.933404 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], ack 341, win 14, options [nop,nop,TS val 267323556 ecr 239507639], length 0 20:16:07.129730 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 1:2897, ack 341, win 14, options [nop,nop,TS val 267325576 ecr 239507639], length 2896 20:16:07.129752 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 2897:4345, ack 341, win 14, options [nop,nop,TS val 267325576 ecr 239507639], length 1448 20:16:07.138422 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 1449, win 137, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.138439 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 4345:5793, ack 341, win 14, options [nop,nop,TS val 267325577 ecr 239512697], length 1448 20:16:07.138446 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [P.], seq 5793:5998, ack 341, win 14, options [nop,nop,TS val 267325577 ecr 239512697], length 205 20:16:07.138450 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 2897, win 182, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.138456 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 4345, win 227, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.148340 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5793, win 273, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148350 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5998, win 318, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148353 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [F.], seq 341, ack 5998, win 318, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148441 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [F.], seq 5998, ack 342, win 14, options [nop,nop,TS val 267325578 ecr 239512699], length 0 20:16:07.156951 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5999, win 318, options [nop,nop,TS val 239512702 ecr 267325578], length 0 From nadahalli at gmail.com Fri Mar 4 22:01:42 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 16:01:42 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: According to http://www.spinics.net/lists/linux-net/msg17545.html - it might be due to "Overflowing the listen() command's incoming connection backlog." I simulated my load again, and here're the listen status before and during the test. 
Before: 3689345 times the listen queue of a socket overflowed 3689345 SYNs to LISTEN sockets dropped During: 3690354 times the listen queue of a socket overflowed 3690354 SYNs to LISTEN sockets dropped My net.core.somaxconn = 262144, which is pretty high. So, I cannot see what else I can do to increase the backlog's length. Is the only way to add more Varnish servers and load balance them behind Nginx or some such? -T On Fri, Mar 4, 2011 at 3:19 PM, Tejaswi Nadahalli wrote: > Under loaded conditions (3 machines doing httperf separately), I did a > separate wget on the side, and am attaching the TCPDUMP of that request. As > you can see, there is a delay in the middle where varnish didn't respond > immediately. If thread/hit-rate conditions are optimal, this delay should be > minimal I thought. > > Any help would be appreciated. > > -T > > > On Fri, Mar 4, 2011 at 2:30 PM, Tejaswi Nadahalli wrote: > >> On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: >> >>> There?s no health check in the backend. Not sure what that does with a >>> one hour grace. I set a short grace with >>> >>> >>> >>> if (req.backend.healthy) { >>> >>> set req.grace = 60s; >>> >>> } else { >>> >>> set req.grace = 4h; >>> >>> } >>> >> >> I am still to add health-checks, directors, etc. Will add them soon. But >> those make sense if the cache-primed performance is good. In my test, I am >> requesting URLs who I know are already in the cache. Varnishstat also shows >> that - there are no cache misses at all. >> >> >>> >>> >>> You also don?t appear to select a backend in recv. >>> >> >> The default backend seems to be getting picked up automatically. >> >> -T >> >> >>> >>> >>> Stefan Caunter >>> >>> Operations >>> >>> Torstar Digital >>> >>> m: (416) 561-4871 >>> >>> >>> >>> >>> >>> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >>> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >>> *Sent:* March-04-11 1:23 PM >>> >>> *To:* varnish-misc at varnish-cache.org >>> *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed >>> Reponse >>> >>> >>> >>> On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan >>> wrote: >>> >>> >>> >>> What does something like firebug show when you request during the load >>> test? The delay may be anything from DNS to the ec2 network. >>> >>> >>> The DNS requests are getting resolved super quick. I am unable to see any >>> other network issues with EC2. I have a similar machine in the same data >>> center running nginx which is doing similar loads, but with no caching >>> requirement, and it's running fine. >>> >>> In my first post, I forgot to attach my VCL, which is a bit too minimal. >>> Am I missing something obvious? >>> >>> ------ >>> backend default0 { >>> .host = "10.202.30.39"; >>> .port = "8000"; >>> } >>> >>> sub vcl_recv { >>> unset req.http.Cookie; >>> set req.grace = 3600s; >>> set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); >>> } >>> >>> sub vcl_deliver { >>> if (obj.hits > 0) { >>> set resp.http.X-Cache = "HIT"; >>> } else { >>> set resp.http.X-Cache = "MISS"; >>> } >>> } >>> ------------------------- >>> >>> Could there be some kind of TCP packet pileup that I am missing? 
>>> >>> -T >>> >>> >>> >>> >>> Stefan Caunter >>> >>> Operations >>> >>> Torstar Digital >>> >>> m: (416) 561-4871 >>> >>> >>> >>> >>> >>> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >>> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >>> *Sent:* March-04-11 1:09 AM >>> *To:* varnish-misc at varnish-cache.org >>> *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed >>> Reponse >>> >>> >>> >>> Hi Everyone, >>> >>> I am seeing a situation similar to : >>> >>> >>> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) >>> >>> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) >>> >>> I have httperf loading a varnish cache with never-expire content. While >>> the load is on, other browser/wget requests to the varnish server get >>> delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem >>> to be impacted. So, is it some kind of thread problem? >>> >>> In production, I see a similar situation with around 1000 req/second >>> load. >>> >>> I am running varnishd with the following command line options (as per >>> http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): >>> >>> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >>> 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p >>> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >>> session_linger=100 -p lru_interval=20 -t 31536000 >>> >>> I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. >>> >>> My network sysctl parameters are tuned according to: >>> http://varnish-cache.org/trac/wiki/Performance >>> fs.file-max = 360000 >>> net.ipv4.ip_local_port_range = 1024 65536 >>> net.core.rmem_max = 16777216 >>> net.core.wmem_max = 16777216 >>> net.ipv4.tcp_rmem = 4096 87380 16777216 >>> net.ipv4.tcp_wmem = 4096 65536 16777216 >>> net.ipv4.tcp_fin_timeout = 3 >>> net.core.netdev_max_backlog = 30000 >>> net.ipv4.tcp_no_metrics_save = 1 >>> net.core.somaxconn = 262144 >>> net.ipv4.tcp_syncookies = 0 >>> net.ipv4.tcp_max_orphans = 262144 >>> net.ipv4.tcp_max_syn_backlog = 262144 >>> net.ipv4.tcp_synack_retries = 2 >>> net.ipv4.tcp_syn_retries = 2 >>> >>> >>> Any help would be greatly appreciated >>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From drais at icantclick.org Fri Mar 4 23:48:31 2011 From: drais at icantclick.org (david raistrick) Date: Fri, 4 Mar 2011 17:48:31 -0500 (EST) Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: On Fri, 4 Mar 2011, Tejaswi Nadahalli wrote: > Is the only way to add more Varnish servers and load balance them behind > Nginx or some such? Your loadbalancer (varnish, nginx, elb, haproxy, etc) will always be a limiting factor if all traffic only goes through that path. I haven't followed the rest of the thread to know where your real bottleneck is, but just keep that in mind. ;) Your next alternatives (this looks like you're @ AWS) would be ELB in front of varnish (which I do, but with mixed success), or a GSLB (dns based loadbalancing) service in the DNS adding an additional level of seperation. (we use akadns and I have lots of praises and no complaints yet. 
:) -- david raistrick http://www.netmeister.org/news/learn2quote.html drais at icantclick.org http://www.expita.com/nomime.html From nadahalli at gmail.com Sat Mar 5 01:39:56 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 19:39:56 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: I added an Nginx server in front of the varnish cache, and things are swimming just fine now. Does it have something to do with accepting requests from different hosts? Where Nginx does better out of the box than Varnish does? -T On Fri, Mar 4, 2011 at 5:48 PM, david raistrick wrote: > On Fri, 4 Mar 2011, Tejaswi Nadahalli wrote: > > Is the only way to add more Varnish servers and load balance them behind >> Nginx or some such? >> > > Your loadbalancer (varnish, nginx, elb, haproxy, etc) will always be a > limiting factor if all traffic only goes through that path. > > I haven't followed the rest of the thread to know where your real > bottleneck is, but just keep that in mind. ;) > > Your next alternatives (this looks like you're @ AWS) would be ELB in front > of varnish (which I do, but with mixed success), or a GSLB (dns based > loadbalancing) service in the DNS adding an additional level of seperation. > (we use akadns and I have lots of praises and no complaints yet. :) > > > > > -- > david raistrick http://www.netmeister.org/news/learn2quote.html > drais at icantclick.org http://www.expita.com/nomime.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan at iol.ie Sat Mar 5 10:48:20 2011 From: ronan at iol.ie (Ronan Mullally) Date: Sat, 5 Mar 2011 09:48:20 +0000 (GMT) Subject: Varnish returning 503s for Googlebot requests (Bug #813?) Message-ID: Hi, I'm a varnish noob. I've only just started rolling out a cache in front of a VBulletin site running Apache that is currently using pound for load balancing. I'm running 2.1.5 on a debian lenny box. Testing is going well, apart from one problem. The site runs VBSEO to generate sitemap files. Without excpetion, every time Googlebot tries to request these files Varnish returns a 503: 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" I can request these files via wget direct from the backend as well as direct from varnish without a problem: --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: Apache Content-Type: application/x-gzip Content-Length: 130283 Date: Sat, 05 Mar 2011 09:23:38 GMT X-Varnish: 1282440127 Age: 0 Via: 1.1 varnish Connection: keep-alive Length: 130283 (127K) [application/x-gzip] Saving to: `/dev/null' 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] I've reverted back to default.vcl, the only changes being to define my own backends. Varnishlog output is below. Having googled a bit the only thing I've found is bug #813, but that was apparently fixed prior to 2.1.5. Am I missing something obvious? 
-Ronan Varnishlog output 18 ReqStart c 66.249.66.246 63009 1282436348 18 RxRequest c GET 18 RxURL c /sitemap_362.xml.gz 18 RxProtocol c HTTP/1.1 18 RxHeader c Host: www.sitename.net 18 RxHeader c Connection: Keep-alive 18 RxHeader c Accept: */* 18 RxHeader c From: googlebot(at)googlebot.com 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 18 RxHeader c Accept-Encoding: gzip,deflate 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT 18 VCL_call c recv 18 VCL_return c lookup 18 VCL_call c hash 18 VCL_return c hash 18 VCL_call c miss 18 VCL_return c fetch 18 Backend c 40 sitename sitename1 40 TxRequest b GET 40 TxURL b /sitemap_362.xml.gz 40 TxProtocol b HTTP/1.1 40 TxHeader b Host: www.sitename.net 40 TxHeader b Accept: */* 40 TxHeader b From: googlebot(at)googlebot.com 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 40 TxHeader b Accept-Encoding: gzip,deflate 40 TxHeader b X-Forwarded-For: 66.249.66.246 40 TxHeader b X-Varnish: 1282436348 40 RxProtocol b HTTP/1.1 40 RxStatus b 200 40 RxResponse b OK 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT 40 RxHeader b Server: Apache 40 RxHeader b Content-Length: 130327 40 RxHeader b Content-Encoding: gzip 40 RxHeader b Vary: Accept-Encoding 40 RxHeader b Content-Type: application/x-gzip 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 18 VCL_call c fetch 18 VCL_return c deliver 18 ObjProtocol c HTTP/1.1 18 ObjStatus c 200 18 ObjResponse c OK 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT 18 ObjHeader c Server: Apache 18 ObjHeader c Content-Encoding: gzip 18 ObjHeader c Vary: Accept-Encoding 18 ObjHeader c Content-Type: application/x-gzip 18 FetchError c straight read_error: 0 40 Fetch_Body b 4 4294967295 1 40 BackendClose b sitename1 18 VCL_call c error 18 VCL_return c deliver 18 VCL_call c deliver 18 VCL_return c deliver 18 TxProtocol c HTTP/1.1 18 TxStatus c 503 18 TxResponse c Service Unavailable 18 TxHeader c Server: Varnish 18 TxHeader c Retry-After: 0 18 TxHeader c Content-Type: text/html; charset=utf-8 18 TxHeader c Content-Length: 419 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT 18 TxHeader c X-Varnish: 1282436348 18 TxHeader c Age: 1 18 TxHeader c Via: 1.1 varnish 18 TxHeader c Connection: close 18 Length c 419 18 ReqEnd c 1282436348 1299316657.660784483 1299316658.684726000 0.478523970 1.023897409 0.000044107 18 SessionClose c error 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 From brice at digome.com Sat Mar 5 20:52:54 2011 From: brice at digome.com (Brice Burgess) Date: Sat, 05 Mar 2011 13:52:54 -0600 Subject: varnishncsa & VirtualHost Message-ID: <4D729496.7010301@digome.com> I was previously running a SVN build of Varnish 2.1.4 which included fixes for timeouts with Content-Length. At the time there was no 2.1.5 debian package. I also applied the "-v virtualhost patch" [ticket 485] to varnishncsa to support virtualhost logging (as this is a multi-website webserver). Yesterday we updated to Debian Squeeze and I figured it a good time to switch back to official varnish-cache.org debs. We are now running varnish 2.1.5 but to my dismay I cannot get VirtualHost logging in varnishncsa? Apparently the logformat (-F) switch did not make it into this release?? This was a bad presumption. Are there any current solutions for getting virtualhost logging to work? Are there any unofficial .debs supporting the -F or -v options for varnishncsa? 
Many thanks, ~ Brice From mattias at nucleus.be Sun Mar 6 22:05:05 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Sun, 6 Mar 2011 22:05:05 +0100 Subject: Varnish returning 503s for Googlebot requests (Bug #813?) In-Reply-To: References: Message-ID: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> Hi Ronan, Not sure if you've managed to test this yet, but Google seem to run with "Accept-Encoding: gzip". Perhaps there's a problem serving the compressed version, whereas your manual wget's don't use this accept-encoding? Regards, Mattias -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: zaterdag 5 maart 2011 10:48 To: varnish-misc at varnish-cache.org Subject: Varnish returning 503s for Googlebot requests (Bug #813?) Hi, I'm a varnish noob. I've only just started rolling out a cache in front of a VBulletin site running Apache that is currently using pound for load balancing. I'm running 2.1.5 on a debian lenny box. Testing is going well, apart from one problem. The site runs VBSEO to generate sitemap files. Without excpetion, every time Googlebot tries to request these files Varnish returns a 503: 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" I can request these files via wget direct from the backend as well as direct from varnish without a problem: --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: Apache Content-Type: application/x-gzip Content-Length: 130283 Date: Sat, 05 Mar 2011 09:23:38 GMT X-Varnish: 1282440127 Age: 0 Via: 1.1 varnish Connection: keep-alive Length: 130283 (127K) [application/x-gzip] Saving to: `/dev/null' 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] I've reverted back to default.vcl, the only changes being to define my own backends. Varnishlog output is below. Having googled a bit the only thing I've found is bug #813, but that was apparently fixed prior to 2.1.5. Am I missing something obvious? 
-Ronan Varnishlog output 18 ReqStart c 66.249.66.246 63009 1282436348 18 RxRequest c GET 18 RxURL c /sitemap_362.xml.gz 18 RxProtocol c HTTP/1.1 18 RxHeader c Host: www.sitename.net 18 RxHeader c Connection: Keep-alive 18 RxHeader c Accept: */* 18 RxHeader c From: googlebot(at)googlebot.com 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 18 RxHeader c Accept-Encoding: gzip,deflate 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT 18 VCL_call c recv 18 VCL_return c lookup 18 VCL_call c hash 18 VCL_return c hash 18 VCL_call c miss 18 VCL_return c fetch 18 Backend c 40 sitename sitename1 40 TxRequest b GET 40 TxURL b /sitemap_362.xml.gz 40 TxProtocol b HTTP/1.1 40 TxHeader b Host: www.sitename.net 40 TxHeader b Accept: */* 40 TxHeader b From: googlebot(at)googlebot.com 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 40 TxHeader b Accept-Encoding: gzip,deflate 40 TxHeader b X-Forwarded-For: 66.249.66.246 40 TxHeader b X-Varnish: 1282436348 40 RxProtocol b HTTP/1.1 40 RxStatus b 200 40 RxResponse b OK 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT 40 RxHeader b Server: Apache 40 RxHeader b Content-Length: 130327 40 RxHeader b Content-Encoding: gzip 40 RxHeader b Vary: Accept-Encoding 40 RxHeader b Content-Type: application/x-gzip 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 18 VCL_call c fetch 18 VCL_return c deliver 18 ObjProtocol c HTTP/1.1 18 ObjStatus c 200 18 ObjResponse c OK 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT 18 ObjHeader c Server: Apache 18 ObjHeader c Content-Encoding: gzip 18 ObjHeader c Vary: Accept-Encoding 18 ObjHeader c Content-Type: application/x-gzip 18 FetchError c straight read_error: 0 40 Fetch_Body b 4 4294967295 1 40 BackendClose b sitename1 18 VCL_call c error 18 VCL_return c deliver 18 VCL_call c deliver 18 VCL_return c deliver 18 TxProtocol c HTTP/1.1 18 TxStatus c 503 18 TxResponse c Service Unavailable 18 TxHeader c Server: Varnish 18 TxHeader c Retry-After: 0 18 TxHeader c Content-Type: text/html; charset=utf-8 18 TxHeader c Content-Length: 419 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT 18 TxHeader c X-Varnish: 1282436348 18 TxHeader c Age: 1 18 TxHeader c Via: 1.1 varnish 18 TxHeader c Connection: close 18 Length c 419 18 ReqEnd c 1282436348 1299316657.660784483 1299316658.684726000 0.478523970 1.023897409 0.000044107 18 SessionClose c error 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From straightflush at gmail.com Sun Mar 6 23:39:41 2011 From: straightflush at gmail.com (AD) Date: Sun, 6 Mar 2011 17:39:41 -0500 Subject: Lots of configs Message-ID: Hello, what is the best way to run an instance of varnish that may need different vcl configurations for each hostname. This could end up being 100-500 includes to map to each hostname and then a long if/then block based on the hostname. Is there a more scalable way to deal with this? We have been toying with running one large varnish instance with tons of includes or possibly running multiple instances of varnish (with the config broken up) or spreading the load across different clusters (kind of like sharding) based on hostname to keep the configuration simple. Any best practices here? 
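One pattern that keeps a many-hostname setup manageable is to auto-generate one small VCL file per site, pull them into the main configuration with include, and keep the dispatch in vcl_recv as a plain host match. A rough sketch; the file names, host names and backend names below are placeholders:

    include "/etc/varnish/sites/example-com.vcl";
    include "/etc/varnish/sites/example-org.vcl";

    sub vcl_recv {
        if (req.http.host ~ "(^|\.)example\.com$") {
            set req.backend = example_com;
        } else if (req.http.host ~ "(^|\.)example\.org$") {
            set req.backend = example_org;
        }
    }

Since VCL is compiled down to C, a long chain of host comparisons is usually cheap next to the per-request network work; the bigger operational cost tends to be compiling and loading a very large VCL, not evaluating it per request.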
Are there any notes on the performance impact of the size of the VCL or the amount of if/then statements in vcl_recv to process a unique call function ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelcrocha at gmail.com Fri Mar 4 17:50:33 2011 From: rafaelcrocha at gmail.com (rafael) Date: Fri, 04 Mar 2011 13:50:33 -0300 Subject: ESI include does not work until I reload page Message-ID: <4D711859.2010101@gmail.com> Hello everyone. I am using varnish to cache my Plone site, with xdv. I have the following configuration: nginx - varnish - nginx (apply xdv transf) - haproxy - plone. My problem is that the first time I open a page, my esi includes are not interpreted.. I get a blank content, and in firebug I can see the esi statement. (If I ask firefox to show me the source, it makes a new request, so the source displayed has the correct replacements). If I reload the page, or open it in a new tab everything works perfectly. The problem is only the first time a browser open the pages. If I close and reopen the browser, the first time the page is opened, the error appears again.. My varnish.vcl config: # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. backend backend_0 { .host = "127.0.0.1"; .port = "1010"; .connect_timeout = 0.4s; .first_byte_timeout = 300s; .between_bytes_timeout = 60s; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { set req.grace = 120s; set req.backend = backend_0; if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ pass; } if (req.http.If-None-Match) { pass; } if (req.url ~ "createObject") { pass; } remove req.http.Accept-Encoding; lookup; } sub vcl_pipe { # This is not necessary if you do not do any request rewriting. set req.http.connection = "close"; set bereq.http.connection = "close"; } sub vcl_hit { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged"; } if (!obj.cacheable) { pass; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache"; } } sub vcl_fetch { set obj.grace = 120s; if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") { pass; } if (req.http.Authorization && !obj.http.Cache-Control ~ "public") { pass; } if (obj.http.Content-Type ~ "text/html") { esi; } } sub vcl_hash { set req.hash += req.url; set req.hash += req.http.host; if (req.http.Accept-Encoding ~ "gzip") { set req.hash += "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.hash += "deflate"; } } Thanks for all, Rafael From neltnerb at MIT.EDU Sat Mar 5 04:33:05 2011 From: neltnerb at MIT.EDU (Brian Neltner) Date: Fri, 04 Mar 2011 20:33:05 -0700 Subject: Hosting multiple virtualhosts in apache2 Message-ID: <1299295985.23065.22.camel@zeeman> Dear Varnish, I'll preface this with saying that I am not an IT person, and so although I think I sort of get the gist of how this all works, if I don't have fairly explicit instructions on how things work I get very confused. That said, I have a slicehost server hosting http://saikoled.com which has varnish as a frontend. 
Varnish listens on port 80, and apache2 listens on port 8080 for ServerName www.saikoled.com with ServerAliases for saikoled.com, saikoled.net, and www.saikoled.net. What I want to do is have the slice host a different website from the same IP address, microreactorsolutions.com. I *think* that I know how to set apache2 up with a virtualhost for this, and my thought was to tell it that the virtualhost should listen on port 8079 instead of 8080 (although maybe this isn't necessary). To try to do this, I looked at the documentation for Advanced Backend Documentation here (http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html). However, the application they're looking at here is sufficiently different from what I want to do (although frustratingly close), that I can't tell what to do. It seems that this is setup to have a subdirectory that matches the regexp "^/java/" go to the other port on the backend, which is all well and good, but this doesn't seem to be something that is likely to work with a totally different ServerName (after all, the ^ suggests pretty strongly that the matching doesn't begin until after the ServerName). I also saw in the "Health Checks" some stuff that looked like it did in fact do some stuff with actual ServerNames, but I really don't get how to tell Varnish where to pull requests on port 80 from which as far as I can see is done with regexps that don't handle what I'm looking for. Sorry if this is covered somewhere more obscure in the manual, but as I said, I'm really not particularly good with computers despite the mit email address (I do chemistry...), and trying to work through this entire manual in detail is going to drive me crazy. Best, Brian Neltner From david at firechaser.com Mon Mar 7 09:33:25 2011 From: david at firechaser.com (David Murphy) Date: Mon, 7 Mar 2011 08:33:25 +0000 Subject: Hosting multiple virtualhosts in apache2 In-Reply-To: <1299295985.23065.22.camel@zeeman> References: <1299295985.23065.22.camel@zeeman> Message-ID: Hi Brian Unless the second site is doing something unusual, I don't think you need worry about having its virtualhost listen on another port. Just have all of your websites configured to run on port 8080 and then any site-specific rules (such as which pages/assets can be cached) can be added to the VCL file. We have a server that has a Varnish front end and about 6 or 7 very different websites running under Apache (port 8080) on the backend, all with their own unique domain names. For the most part all sites share the same rules e.g. such as 'always cache images' and 'never cache .php' but a couple of sites need to be treated different e.g. 'do not cache anything in the /blah directory of site abc' and we add that rule to the VCL file. Best, David On Sat, Mar 5, 2011 at 3:33 AM, Brian Neltner wrote: > Dear Varnish, > > I'll preface this with saying that I am not an IT person, and so > although I think I sort of get the gist of how this all works, if I > don't have fairly explicit instructions on how things work I get very > confused. > > That said, I have a slicehost server hosting http://saikoled.com which > has varnish as a frontend. Varnish listens on port 80, and apache2 > listens on port 8080 for ServerName www.saikoled.com with ServerAliases > for saikoled.com, saikoled.net, and www.saikoled.net. > > What I want to do is have the slice host a different website from the > same IP address, microreactorsolutions.com. 
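
To make David's suggestion concrete, here is a rough sketch of one VCL fronting several name-based sites on the same backend port, with a single site-specific exception added on top. The /blah path and the pass rule are purely illustrative, and the hostname is taken from Brian's example:

backend apache {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    set req.backend = apache;

    # hypothetical per-site rule: never cache this area of the second site
    if (req.http.host ~ "(^|\.)microreactorsolutions\.com$" && req.url ~ "^/blah/") {
        return (pass);
    }
}

Apache then picks the right VirtualHost from the Host header, which Varnish passes through unchanged.
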
> > I *think* that I know how to set apache2 up with a virtualhost for this, > and my thought was to tell it that the virtualhost should listen on port > 8079 instead of 8080 (although maybe this isn't necessary). > > To try to do this, I looked at the documentation for Advanced Backend > Documentation here > ( > http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html > ). > > However, the application they're looking at here is sufficiently > different from what I want to do (although frustratingly close), that I > can't tell what to do. It seems that this is setup to have a > subdirectory that matches the regexp "^/java/" go to the other port on > the backend, which is all well and good, but this doesn't seem to be > something that is likely to work with a totally different ServerName > (after all, the ^ suggests pretty strongly that the matching doesn't > begin until after the ServerName). > > I also saw in the "Health Checks" some stuff that looked like it did in > fact do some stuff with actual ServerNames, but I really don't get how > to tell Varnish where to pull requests on port 80 from which as far as I > can see is done with regexps that don't handle what I'm looking for. > > Sorry if this is covered somewhere more obscure in the manual, but as I > said, I'm really not particularly good with computers despite the mit > email address (I do chemistry...), and trying to work through this > entire manual in detail is going to drive me crazy. > > Best, > Brian Neltner > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 14:02:27 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 08:02:27 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: <4D74D763.706@sbgnet.com> The best way would be to use a jump table. By that, I mean to make multiple subroutines in C, and then to jump to the different subroutines by looking up pointers to the subroutines using a string hashing/lookup system. You would also need a flag to indicate whether the hash has been 'initialized' yet as well. The initialization would consist of storing function pointers at the hash locations corresponding to each of the domains. I attempted to do this myself when I first started using varnish, but I was having problems with varnish crashing when attempting to use the code I wrote in C. There may be limitations to the C code that can be used. On 3/6/2011 5:39 PM, AD wrote: > Hello, > what is the best way to run an instance of varnish that may need > different vcl configurations for each hostname. This could end up > being 100-500 includes to map to each hostname and then a long if/then > block based on the hostname. Is there a more scalable way to deal > with this? We have been toying with running one large varnish > instance with tons of includes or possibly running multiple instances > of varnish (with the config broken up) or spreading the load across > different clusters (kind of like sharding) based on hostname to keep > the configuration simple. > > Any best practices here? Are there any notes on the performance > impact of the size of the VCL or the amount of if/then statements in > vcl_recv to process a unique call function ? 
> > Thanks > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 15:23:54 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 09:23:54 -0500 Subject: Lots of configs In-Reply-To: <4D74D763.706@sbgnet.com> References: <4D74D763.706@sbgnet.com> Message-ID: but dont all the configs need to be loaded at runtime, not sure the overhead here? I think what you mentioned seems like a really innovative way to "call" the function but what about anyimpact to "loading" all these configs? If i understand what you are saying, i put a "call test_func;" in vcl_recv which turned into this in C if (VGC_function_test_func(sp)) return (1); if Are you suggesting your hash_table would take over this step ? Adam On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski wrote: > The best way would be to use a jump table. > By that, I mean to make multiple subroutines in C, and then to jump to the > different subroutines by looking > up pointers to the subroutines using a string hashing/lookup system. > > You would also need a flag to indicate whether the hash has been > 'initialized' yet as well. > The initialization would consist of storing function pointers at the hash > locations corresponding to each > of the domains. > > I attempted to do this myself when I first started using varnish, but I was > having problems with varnish > crashing when attempting to use the code I wrote in C. There may be > limitations to the C code that can be > used. > > > On 3/6/2011 5:39 PM, AD wrote: > > Hello, > > what is the best way to run an instance of varnish that may need different > vcl configurations for each hostname. This could end up being 100-500 > includes to map to each hostname and then a long if/then block based on the > hostname. Is there a more scalable way to deal with this? We have been > toying with running one large varnish instance with tons of includes or > possibly running multiple instances of varnish (with the config broken up) > or spreading the load across different clusters (kind of like sharding) > based on hostname to keep the configuration simple. > > Any best practices here? Are there any notes on the performance impact > of the size of the VCL or the amount of if/then statements in vcl_recv to > process a unique call function ? > > Thanks > > > _______________________________________________ > varnish-misc mailing listvarnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 15:45:41 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 09:45:41 -0500 Subject: Lots of configs In-Reply-To: References: <4D74D763.706@sbgnet.com> Message-ID: <4D74EF95.7090907@sbgnet.com> vcl configuration is turned straight into C first of all. You can put your own C code in both the functions and globally. When including headers/libraries, you essentially just have to include the code globally. 
I am not sure if there is any 'init' function when varnish is called, so I was suggesting that the hash be initiated by just checking if the hash has been created yet. This will cause a penalty to the first vcl_recv call that goes through; but that shouldn't matter. Note that I just passed a dummy number as an example to the custom config, and that I didn't show how to do anything in the custom function. In this example, all custom stuff would be in straight C. You would need to use varnish itself to compile what config you want and look at the C code it generates to figure out how to tie in all your custom configs.... eg: C{ #include "hash.c" // a string hashing store/lookup libary; you'll need to write one // or possibly just use some freely available one. hashc *hash=0; void init_hash() { if( hash ) return; hash.store( 'test.com', &test_com ); // same for all domains } void test_com( int n ) { // custom vcl_recv stuff for domain 'test' } } sub vcl_recv { C{ char *domain; // [ place some code to fetch domain and put it in domain here ] if( !hash ) init_hash(); void (*func)(int); func = hash.lookup( domain ); func(1); } } On 3/7/2011 9:23 AM, AD wrote: > but dont all the configs need to be loaded at runtime, not sure the > overhead here? I think what you mentioned seems like a really > innovative way to "call" the function but what about anyimpact to > "loading" all these configs? > > If i understand what you are saying, i put a "call test_func;" in > vcl_recv which turned into this in C > > if (VGC_function_test_func(sp)) > return (1); > if > > Are you suggesting your hash_table would take over this step ? > > Adam > > On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski > wrote: > > The best way would be to use a jump table. > By that, I mean to make multiple subroutines in C, and then to > jump to the different subroutines by looking > up pointers to the subroutines using a string hashing/lookup system. > > You would also need a flag to indicate whether the hash has been > 'initialized' yet as well. > The initialization would consist of storing function pointers at > the hash locations corresponding to each > of the domains. > > I attempted to do this myself when I first started using varnish, > but I was having problems with varnish > crashing when attempting to use the code I wrote in C. There may > be limitations to the C code that can be > used. > > > On 3/6/2011 5:39 PM, AD wrote: >> Hello, >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end >> up being 100-500 includes to map to each hostname and then a long >> if/then block based on the hostname. Is there a more scalable >> way to deal with this? We have been toying with running one >> large varnish instance with tons of includes or possibly running >> multiple instances of varnish (with the config broken up) or >> spreading the load across different clusters (kind of like >> sharding) based on hostname to keep the configuration simple. >> >> Any best practices here? Are there any notes on the performance >> impact of the size of the VCL or the amount of if/then statements >> in vcl_recv to process a unique call function ? 
>> >> Thanks >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlane at ahbelo.com Mon Mar 7 15:58:08 2011 From: rlane at ahbelo.com (Lane, Richard) Date: Mon, 7 Mar 2011 14:58:08 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup Message-ID: I am looking into supporting Google?s ?First Click Free for Web Search?. I need to allow the GoogleBots to index the full content of my sites but still maintain the Registration wall for everyone else. Google suggests that you detect there GoogleBots by reverse DNS lookup of the requesters IP. Google Desc: http://www.google.com/support/webmasters/bin/answer.py?answer=80553 Has anyone done DNS lookups via VCL to verify access to content or to cache content? System Desc: Varnish 2.1.4 RHEL 5-4 Apache 2.2x - Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattias at nucleus.be Mon Mar 7 16:05:08 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Mon, 7 Mar 2011 16:05:08 +0100 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: References: Message-ID: <18834F5BEC10824891FB8B22AC821A5A013D0CEB@nucleus-srv01.Nucleus.local> Hi, I would look at the user agent to verify if it's a GoogleBot or not, as that's more easily checked via VCL. All GoogleBots also adhere to the correct User-Agent. There really aren't that many users that spoof their User-Agent to gain extra access. Also keep in mind that serving GoogleBot different content than actual users will get you penalties in SEO, eventually dropping your Google ranking. Just, FYI. Regards, Mattias From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Lane, Richard Sent: maandag 7 maart 2011 15:58 To: varnish-misc at varnish-cache.org Subject: Let GoogleBot Crawl full content, reverse DNS lookup I am looking into supporting Google's "First Click Free for Web Search". I need to allow the GoogleBots to index the full content of my sites but still maintain the Registration wall for everyone else. Google suggests that you detect there GoogleBots by reverse DNS lookup of the requesters IP. Google Desc: http://www.google.com/support/webmasters/bin/answer.py?answer=80553 Has anyone done DNS lookups via VCL to verify access to content or to cache content? System Desc: Varnish 2.1.4 RHEL 5-4 Apache 2.2x - Richard From richard.chiswell at mangahigh.com Mon Mar 7 16:08:03 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 07 Mar 2011 15:08:03 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: References: Message-ID: <4D74F4D3.6040008@mangahigh.com> On 07/03/2011 14:58, Lane, Richard wrote: > > I am looking into supporting Google?s ?First Click Free for Web > Search?. I need to allow the GoogleBots to index the full content of > my sites but still maintain the Registration wall for everyone else. > Google suggests that you detect there GoogleBots by reverse DNS lookup > of the requesters IP. 
> > Google Desc: > http://www.google.com/support/webmasters/bin/answer.py?answer=80553 > > Has anyone done DNS lookups via VCL to verify access to content or to > cache content? I believe this /could/ be done using a C function, but it's not something I've had experience of before. What you could do is detect the Google user-agent in varnish, and then pass that and the IP to a backend script with the original request: such as /* Varnish 2.0.6 psuedo code - may need updating */ if (req.http.user-agent == "Googlebot") { set.http.x-varnish-originalurl = req.url; set req.url = "/googlecheck?ip= " client.ip "&originalurl=" req.url; lookup; } and the Googlecheck script actually does the rDNS look up and if it matches, it returns the contents of the requested url. Richard Chiswell http://www.mangahigh.com (Speaking personally yadda yadda) -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 16:30:22 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 10:30:22 -0500 Subject: Lots of configs In-Reply-To: <4D74EF95.7090907@sbgnet.com> References: <4D74D763.706@sbgnet.com> <4D74EF95.7090907@sbgnet.com> Message-ID: Cool, as for the startup, i wonder if you can instead of trying to insert into VCL_Init, try to do just, as part of the startup process hit a special URL to load the hash_Table. Or another possibility might be to load an external module, and in there, populate the hash. On Mon, Mar 7, 2011 at 9:45 AM, David Helkowski wrote: > vcl configuration is turned straight into C first of all. > You can put your own C code in both the functions and globally. > When including headers/libraries, you essentially just have to include the > code globally. > > I am not sure if there is any 'init' function when varnish is called, so I > was suggesting that > the hash be initiated by just checking if the hash has been created yet. > > This will cause a penalty to the first vcl_recv call that goes through; but > that shouldn't > matter. > > Note that I just passed a dummy number as an example to the custom config, > and that > I didn't show how to do anything in the custom function. In this example, > all custom > stuff would be in straight C. You would need to use varnish itself to > compile what config > you want and look at the C code it generates to figure out how to tie in > all your custom > configs.... > > eg: > > C{ > #include "hash.c" // a string hashing store/lookup libary; you'll need to > write one > // or possibly just use some freely available one. > hashc *hash=0; > > void init_hash() { > if( hash ) return; > hash.store( 'test.com', &test_com ); > // same for all domains > } > > void test_com( int n ) { > // custom vcl_recv stuff for domain 'test' > } > } > > sub vcl_recv { > C{ > char *domain; > // [ place some code to fetch domain and put it in domain here ] > if( !hash ) init_hash(); > void (*func)(int); > func = hash.lookup( domain ); > func(1); > > } > } > > On 3/7/2011 9:23 AM, AD wrote: > > but dont all the configs need to be loaded at runtime, not sure the > overhead here? I think what you mentioned seems like a really innovative > way to "call" the function but what about anyimpact to "loading" all these > configs? > > If i understand what you are saying, i put a "call test_func;" in > vcl_recv which turned into this in C > > if (VGC_function_test_func(sp)) > return (1); > if > > Are you suggesting your hash_table would take over this step ? 
> > Adam > > On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski wrote: > >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then to jump to the >> different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup system. >> >> You would also need a flag to indicate whether the hash has been >> 'initialized' yet as well. >> The initialization would consist of storing function pointers at the hash >> locations corresponding to each >> of the domains. >> >> I attempted to do this myself when I first started using varnish, but I >> was having problems with varnish >> crashing when attempting to use the code I wrote in C. There may be >> limitations to the C code that can be >> used. >> >> >> On 3/6/2011 5:39 PM, AD wrote: >> >> Hello, >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? We have >> been toying with running one large varnish instance with tons of includes or >> possibly running multiple instances of varnish (with the config broken up) >> or spreading the load across different clusters (kind of like sharding) >> based on hostname to keep the configuration simple. >> >> Any best practices here? Are there any notes on the performance impact >> of the size of the VCL or the amount of if/then statements in vcl_recv to >> process a unique call function ? >> >> Thanks >> >> >> _______________________________________________ >> varnish-misc mailing listvarnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 16:56:20 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 10:56:20 -0500 Subject: Lots of configs In-Reply-To: References: <4D74D763.706@sbgnet.com> <4D74EF95.7090907@sbgnet.com> Message-ID: <4D750024.1060400@sbgnet.com> It is true that there are potentially better places to setup the hash, but it is best to check for a null pointer for the hash object anyway any time you use it. The setup itself is also very fast; you just don't want to do it every time. Note in my init function I forgot a 'hash = new hashc()'. Also; if you are going to do this, you will likely have a preset list of domains you are using. In such a case, the best type of hash to use would be a 'minimal perfect hash'. You could use the 'gperf' library to generate a suitable algorithm to map your domain strings into an array. On 3/7/2011 10:30 AM, AD wrote: > Cool, as for the startup, i wonder if you can instead of trying to > insert into VCL_Init, try to do just, as part of the startup process > hit a special URL to load the hash_Table. Or another possibility > might be to load an external module, and in there, populate the hash. 
> > > > On Mon, Mar 7, 2011 at 9:45 AM, David Helkowski > wrote: > > vcl configuration is turned straight into C first of all. > You can put your own C code in both the functions and globally. > When including headers/libraries, you essentially just have to > include the code globally. > > I am not sure if there is any 'init' function when varnish is > called, so I was suggesting that > the hash be initiated by just checking if the hash has been > created yet. > > This will cause a penalty to the first vcl_recv call that goes > through; but that shouldn't > matter. > > Note that I just passed a dummy number as an example to the custom > config, and that > I didn't show how to do anything in the custom function. In this > example, all custom > stuff would be in straight C. You would need to use varnish itself > to compile what config > you want and look at the C code it generates to figure out how to > tie in all your custom > configs.... > > eg: > > C{ > #include "hash.c" // a string hashing store/lookup libary; > you'll need to write one > // or possibly just use some freely available one. > hashc *hash=0; > > void init_hash() { > if( hash ) return; > > hash.store( 'test.com ', &test_com ); > // same for all domains > } > > void test_com( int n ) { > // custom vcl_recv stuff for domain 'test' > } > } > > sub vcl_recv { > C{ > char *domain; > // [ place some code to fetch domain and put it in domain here ] > if( !hash ) init_hash(); > void (*func)(int); > func = hash.lookup( domain ); > func(1); > > } > } > > On 3/7/2011 9:23 AM, AD wrote: >> but dont all the configs need to be loaded at runtime, not sure >> the overhead here? I think what you mentioned seems like a >> really innovative way to "call" the function but what about >> anyimpact to "loading" all these configs? >> >> If i understand what you are saying, i put a "call test_func;" in >> vcl_recv which turned into this in C >> >> if (VGC_function_test_func(sp)) >> return (1); >> if >> >> Are you suggesting your hash_table would take over this step ? >> >> Adam >> >> On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski >> > wrote: >> >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then >> to jump to the different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup >> system. >> >> You would also need a flag to indicate whether the hash has >> been 'initialized' yet as well. >> The initialization would consist of storing function pointers >> at the hash locations corresponding to each >> of the domains. >> >> I attempted to do this myself when I first started using >> varnish, but I was having problems with varnish >> crashing when attempting to use the code I wrote in C. There >> may be limitations to the C code that can be >> used. >> >> >> On 3/6/2011 5:39 PM, AD wrote: >>> Hello, >>> what is the best way to run an instance of varnish that may >>> need different vcl configurations for each hostname. This >>> could end up being 100-500 includes to map to each hostname >>> and then a long if/then block based on the hostname. Is >>> there a more scalable way to deal with this? We have been >>> toying with running one large varnish instance with tons of >>> includes or possibly running multiple instances of varnish >>> (with the config broken up) or spreading the load across >>> different clusters (kind of like sharding) based on hostname >>> to keep the configuration simple. >>> >>> Any best practices here? 
Are there any notes on the >>> performance impact of the size of the VCL or the amount of >>> if/then statements in vcl_recv to process a unique call >>> function ? >>> >>> Thanks >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junxian.yan at gmail.com Mon Mar 7 16:56:09 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 07:56:09 -0800 Subject: Weird "^" in the regex of varnish Message-ID: Hi Guys I encountered this issue in two different environment(env1 and env2). The sample code is like: in vcl_fetch() else if (req.url ~ "^/tables/\w{6}/summary.js") { if (req.http.Set-Cookie !~ " u=\w") { unset beresp.http.Set-Cookie; set beresp.ttl = 2h; set beresp.grace = 22h; return(deliver); } else { return(pass); } } In env1, the request like http://mytest.com/api/v2/tables/vyulrh/read.jsamlcan enter lookup and then enter fetch to create a new cache entry. Next time, the same request will hit cache and do not do fetch anymore In env2, the same request enter and go into vcl_fetch, the regex will fail and can not enter deliver, so the resp will be sent to end user without cache creating. I'm not sure if there is somebody has the same issue. Is it platform related ? R -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlane at ahbelo.com Mon Mar 7 17:51:49 2011 From: rlane at ahbelo.com (Lane, Richard) Date: Mon, 7 Mar 2011 16:51:49 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A013D0CEB@nucleus-srv01.Nucleus.local> Message-ID: Mattias, I am aware of Google's policy about serving different content to search users, which is why I am have to implement their "First Click Free" program. I will use the User-Agent but need to go a step further and verify the crawler is who they say they are by DNS. Cheers, Richard On 3/7/11 9:05 AM, "Mattias Geniar" wrote: > Hi, > > I would look at the user agent to verify if it's a GoogleBot or not, as > that's more easily checked via VCL. All GoogleBots also adhere to the > correct User-Agent. > There really aren't that many users that spoof their User-Agent to gain > extra access. > > Also keep in mind that serving GoogleBot different content than actual > users will get you penalties in SEO, eventually dropping your Google > ranking. Just, FYI. > > Regards, > Mattias > > From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Lane, > Richard > Sent: maandag 7 maart 2011 15:58 > To: varnish-misc at varnish-cache.org > Subject: Let GoogleBot Crawl full content, reverse DNS lookup > > > I am looking into supporting Google's "First Click Free for Web Search". > I need to allow the GoogleBots to index the full content of my sites but > still maintain the Registration wall for everyone else. Google suggests > that you detect there GoogleBots by reverse DNS lookup of the requesters > IP. 
> > Google Desc: > http://www.google.com/support/webmasters/bin/answer.py?answer=80553 > > Has anyone done DNS lookups via VCL to verify access to content or to > cache content? > > System Desc: > Varnish 2.1.4 > RHEL 5-4 > Apache 2.2x > > - Richard From perbu at varnish-software.com Mon Mar 7 19:35:36 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 19:35:36 +0100 Subject: Lots of configs In-Reply-To: References: Message-ID: Hi, On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > > what is the best way to run an instance of varnish that may need different > vcl configurations for each hostname. This could end up being 100-500 > includes to map to each hostname and then a long if/then block based on the > hostname. Is there a more scalable way to deal with this? > CPU and memory bandwidth is abundant on modern servers. I'm actually not sure that having a 500 entries long if/else statement will hamper performance at all. Remember, there will be no system calls. I would guess a modern server will execute at least a four million regex-based if/else per second per CPU core if most of the code and data will be in the on die cache. So executing 500 matches should take about 0.5ms. It might not make sense to optimize this. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 19:52:22 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 13:52:22 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: <4D752966.7000203@sbgnet.com> A modern CPU can run, at most, around 10 million -assembly based- instructions per second. See http://en.wikipedia.org/wiki/Instructions_per_second A regular expression compare is likely at least 20 or so assembly instructions. That gives around 500,000 regular expression compares if you are using 100% of the CPU just for that. A reasonable amount of CPU to consume would be 30% ( at most ). So; you are left with around 150k regular expression checks per second. Lets suppose there are 500 different domains. On average, you will be doing 250 if/else checks per call. 150k / 250 = 600. That means that you will get, under fair conditions, a max of about 600 hits per second. The person asking the question likely has 500 domains running. That gives a little over 1 hit possible per second per domain. Do you think that is an acceptable solution for this person? I think not. Compare it to a hash lookup. A hash lookup, using a good minimal perfect hashing algorithms, will take at most around 10 operations. Using the same math as above, that gives around 300k lookups per second. A hash would be roughly 500 times faster than using if/else... On 3/7/2011 1:35 PM, Per Buer wrote: > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD > wrote: > > > what is the best way to run an instance of varnish that may need > different vcl configurations for each hostname. This could end up > being 100-500 includes to map to each hostname and then a long > if/then block based on the hostname. Is there a more scalable way > to deal with this? > > > CPU and memory bandwidth is abundant on modern servers. I'm actually > not sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. 
I would > guess a modern server will execute at least a four million regex-based > if/else per second per CPU core if most of the code and data will be > in the on die cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 19:55:09 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 13:55:09 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: Thanks Per. I guess the other part of this was to make the config more scalable so we are not constantly adding if/else blocks. Would by nice to have a way to just do something like call(custom_ + req.hostname) On Mon, Mar 7, 2011 at 1:35 PM, Per Buer wrote: > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? >> > > CPU and memory bandwidth is abundant on modern servers. I'm actually not > sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. I would guess a > modern server will execute at least a four million regex-based if/else per > second per CPU core if most of the code and data will be in the on die > cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan at iol.ie Mon Mar 7 20:45:43 2011 From: ronan at iol.ie (Ronan Mullally) Date: Mon, 7 Mar 2011 19:45:43 +0000 (GMT) Subject: Varnish returning 503s for Googlebot requests (Bug #813?) In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> References: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> Message-ID: Hi Mattias, On Sun, 6 Mar 2011, Mattias Geniar wrote: > Not sure if you've managed to test this yet, but Google seem to run with > "Accept-Encoding: gzip". Perhaps there's a problem serving the > compressed version, whereas your manual wget's don't use this > accept-encoding? You're spot on. Adding an Accept-Encoding header to my wget requests resulted in failures. The content length reported being longer than that actually retrieved. I tracked the fault down to PHP doing compression via zlib.compression. Thanks for your help. 
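
A possible stop-gap on the Varnish side, until the backend compression is sorted out, is simply not to advertise gzip support for those URLs, so that PHP's zlib output compression never kicks in. A sketch, assuming the sitemap paths look like the ones in the log above:

sub vcl_recv {
    # the sitemaps are served as application/x-gzip already; an extra backend
    # compression layer adds nothing and is what broke the fetch here
    if (req.url ~ "^/sitemap_\d+\.xml\.gz$") {
        remove req.http.Accept-Encoding;
    }
}
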
-Ronan > -----Original Message----- > >From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan > Mullally > Sent: zaterdag 5 maart 2011 10:48 > To: varnish-misc at varnish-cache.org > Subject: Varnish returning 503s for Googlebot requests (Bug #813?) > > Hi, > > I'm a varnish noob. I've only just started rolling out a cache in front > of a VBulletin site running Apache that is currently using pound for > load > balancing. > > I'm running 2.1.5 on a debian lenny box. Testing is going well, apart > from one problem. The site runs VBSEO to generate sitemap files. > Without excpetion, every time Googlebot tries to request these files > Varnish returns a 503: > > 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET > http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" > "Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html)" > > I can request these files via wget direct from the backend as well as > direct from varnish without a problem: > > --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz > > HTTP request sent, awaiting response... > HTTP/1.1 200 OK > Server: Apache > Content-Type: application/x-gzip > Content-Length: 130283 > Date: Sat, 05 Mar 2011 09:23:38 GMT > X-Varnish: 1282440127 > Age: 0 > Via: 1.1 varnish > Connection: keep-alive > Length: 130283 (127K) [application/x-gzip] > Saving to: `/dev/null' > > 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] > > I've reverted back to default.vcl, the only changes being to define my > own > backends. Varnishlog output is below. Having googled a bit the only > thing I've found is bug #813, but that was apparently fixed prior to > 2.1.5. Am I missing something obvious? > > > -Ronan > > > Varnishlog output > > 18 ReqStart c 66.249.66.246 63009 1282436348 > 18 RxRequest c GET > 18 RxURL c /sitemap_362.xml.gz > 18 RxProtocol c HTTP/1.1 > 18 RxHeader c Host: www.sitename.net > 18 RxHeader c Connection: Keep-alive > 18 RxHeader c Accept: */* > 18 RxHeader c From: googlebot(at)googlebot.com > 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html) > 18 RxHeader c Accept-Encoding: gzip,deflate > 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT > 18 VCL_call c recv > 18 VCL_return c lookup > 18 VCL_call c hash > 18 VCL_return c hash > 18 VCL_call c miss > 18 VCL_return c fetch > 18 Backend c 40 sitename sitename1 > 40 TxRequest b GET > 40 TxURL b /sitemap_362.xml.gz > 40 TxProtocol b HTTP/1.1 > 40 TxHeader b Host: www.sitename.net > 40 TxHeader b Accept: */* > 40 TxHeader b From: googlebot(at)googlebot.com > 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html) > 40 TxHeader b Accept-Encoding: gzip,deflate > 40 TxHeader b X-Forwarded-For: 66.249.66.246 > 40 TxHeader b X-Varnish: 1282436348 > 40 RxProtocol b HTTP/1.1 > 40 RxStatus b 200 > 40 RxResponse b OK > 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT > 40 RxHeader b Server: Apache > 40 RxHeader b Content-Length: 130327 > 40 RxHeader b Content-Encoding: gzip > 40 RxHeader b Vary: Accept-Encoding > 40 RxHeader b Content-Type: application/x-gzip > 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 > 18 VCL_call c fetch > 18 VCL_return c deliver > 18 ObjProtocol c HTTP/1.1 > 18 ObjStatus c 200 > 18 ObjResponse c OK > 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT > 18 ObjHeader c Server: Apache > 18 ObjHeader c Content-Encoding: gzip > 18 ObjHeader c Vary: 
Accept-Encoding > 18 ObjHeader c Content-Type: application/x-gzip > 18 FetchError c straight read_error: 0 > 40 Fetch_Body b 4 4294967295 1 > 40 BackendClose b sitename1 > 18 VCL_call c error > 18 VCL_return c deliver > 18 VCL_call c deliver > 18 VCL_return c deliver > 18 TxProtocol c HTTP/1.1 > 18 TxStatus c 503 > 18 TxResponse c Service Unavailable > 18 TxHeader c Server: Varnish > 18 TxHeader c Retry-After: 0 > 18 TxHeader c Content-Type: text/html; charset=utf-8 > 18 TxHeader c Content-Length: 419 > 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT > 18 TxHeader c X-Varnish: 1282436348 > 18 TxHeader c Age: 1 > 18 TxHeader c Via: 1.1 varnish > 18 TxHeader c Connection: close > 18 Length c 419 > 18 ReqEnd c 1282436348 1299316657.660784483 > 1299316658.684726000 0.478523970 1.023897409 0.000044107 > 18 SessionClose c error > 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From drew.smathers at gmail.com Mon Mar 7 21:58:15 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Mon, 7 Mar 2011 15:58:15 -0500 Subject: Varnish still 503ing after adding grace to VCL Message-ID: Hi all, I'm trying to grace as a means of ensuring that cached content is delivered from varnish past it's TTL if backends can't generate a response. With some experiments this does not seem to happen with our setup. After an object is cached, varnish still returns a 503 within the grace period if a backend goes down. Below are details. version: varnish-2.1.4 SVN 5447M I stripped down my VCL to the following to demonstrate: backend webapp { .host = "127.0.0.1"; .port = "8000"; } sub vcl_recv { set req.backend = webapp; set req.grace = 1h; } sub vcl_fetch { set beresp.grace = 24h; } Running varnish: varnishd -f simple.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 First request: GET /some/path/ HTTP/1.1 Accept-Language: en HTTP/1.1 200 OK Server: WSGIServer/0.1 Python/2.6.6 Vary: Authorization, Accept-Language, X-Gttv-Apikey Etag: "e9c12380818a05ed40ae7df4dad67751" Content-Type: application/json; charset=utf-8 Content-Language: en Cache-Control: max-age=30 Content-Length: 425 Date: Mon, 07 Mar 2011 16:12:56 GMT X-Varnish: 377135316 377135314 Age: 6 Via: 1.1 varnish Connection: close Wait 30 seconds, kill backend app, then make another request through varnish: GET /some/path/ HTTP/1.1 Accept-Language: en HTTP/1.1 503 Service Unavailable Server: Varnish Retry-After: 0 Content-Type: text/html; charset=utf-8 Content-Length: 418 Date: Mon, 07 Mar 2011 16:14:02 GMT X-Varnish: 377135317 Age: 0 Via: 1.1 varnish Connection: close Any help or clarification on request grace would be appreciated. Thanks, -Drew From brice at digome.com Mon Mar 7 22:05:52 2011 From: brice at digome.com (Brice Burgess) Date: Mon, 07 Mar 2011 15:05:52 -0600 Subject: varnishncsa and -F option? Message-ID: <4D7548B0.9090608@digome.com> Is there a production-ready version of varnishncsa that supports the -F switch implemented 4 months ago here: http://www.varnish-cache.org/trac/changeset/46b90935e56c7a448fb33342f03fdcbd14478ac2? The -F / LogFormat switch allows for VirtualHost support -- although appears to have missed the 2.1.5 release? 
Thanks, ~ Brice From perbu at varnish-software.com Mon Mar 7 22:18:03 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 22:18:03 +0100 Subject: Lots of configs In-Reply-To: <4D752966.7000203@sbgnet.com> References: <4D752966.7000203@sbgnet.com> Message-ID: Hi David, List. On Mon, Mar 7, 2011 at 7:52 PM, David Helkowski wrote: > A modern CPU can run, at most, around 10 million -assembly based- > instructions per second. > See http://en.wikipedia.org/wiki/Instructions_per_second > A regular expression compare is likely at least 20 or so assembly > instructions. > That gives around 500,000 regular expression compares if you are using 100% > of the > CPU just for that. A reasonable amount of CPU to consume would be 30% ( at > most ). > So; you are left with around 150k regular expression checks per second. > I guess we should stop speculating. I wrote a short program to do in-cache pcre pattern matching. My laptop (i5 M560) seems to churn through 7M pcre matches a second so I was a bit off. The matches where anchored and small but varying it doesn't seem to affect performance much. The source for my test is here: http://pastebin.com/a68y15hp (.. ) > Compare it to a hash lookup. A hash lookup, using a good minimal perfect > hashing algorithms, > will take at most around 10 operations. Using the same math as above, that > gives around 300k > lookups per second. A hash would be roughly 500 times faster than using > if/else... > Of course a hash lookup is faster. But if you got to deploy a whole bunch of scary inline C that will seriously intimidate the summer intern and makes all the other fear the config it's just not worth it. Of course it isn't as cool a building a hash table of functions in inline C, but is it useful when the speedup gain is lost in buffer bloat anyway? I think not. Cheers, Per. > > > On 3/7/2011 1:35 PM, Per Buer wrote: > > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? >> > > CPU and memory bandwidth is abundant on modern servers. I'm actually not > sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. I would guess a > modern server will execute at least a four million regex-based if/else per > second per CPU core if most of the code and data will be in the on die > cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.shenton at nasa.gov Mon Mar 7 22:36:41 2011 From: chris.shenton at nasa.gov (Shenton, Chris (HQ-LM020)[INDYNE INC]) Date: Mon, 7 Mar 2011 15:36:41 -0600 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? Message-ID: To accommodate our hosting environment, we need to run varnish on two different ports, but we want to make both use the same cache. That is, if I have: varnishd -a 127.0.0.1:8001,127.0.0.1:8002 I want client requests to both both 8001 and 8002 ports to share the content of the same cache. So if one client hits :8002 with a URL and another later hits :8001 with the same URL, I want the latter to retrieve the content cached by the former request. In my testing, however, it seems that this is not happening, that the doc is getting cached once per varnishd port. First on 8001, we see it's not from cache as this is the first request: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421263 Then to 8002 where I'd hope it was returned from cache, but the X-Varnish and Age headers indicate it's not: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421264 Now back to the first port, 8001, and we see it is in fact returned from cache: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421265 730421263 And if I try 8002 again, it's also returned from cache: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421266 730421264 Is there a way to make both ports share the same hash? I'm guessing the listening port is included with the hash key so that's why it appears each URL is being stored separately. If so, is there a way to remove the listening port from the hash key the document is stored under? Or would that even work? Thanks. From jhayter at manta.com Mon Mar 7 22:48:37 2011 From: jhayter at manta.com (Jim Hayter) Date: Mon, 7 Mar 2011 21:48:37 +0000 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? In-Reply-To: References: Message-ID: In my environment, port numbers may be on the request, but are not needed to respond nor should they influence the cache. In my vcl_recv, I have the following lines: /* determine vhost name w/out port number */ set req.http.newhost = regsub(req.http.host, "([^:]*)(:.*)?$", "\1"); set req.http.host = req.http.newhost; This strips off the port number from the host name in the request. Doing it this way, the port number is discarded and NOT passed on to the application. It is also not present when creating and looking up hash entries. If you require the port number at the application level, you will have to do something a bit different to preserve it. Jim -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Shenton, Chris (HQ-LM020)[INDYNE INC] Sent: Monday, March 07, 2011 4:37 PM To: varnish-misc at varnish-cache.org Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? To accommodate our hosting environment, we need to run varnish on two different ports, but we want to make both use the same cache. That is, if I have: varnishd -a 127.0.0.1:8001,127.0.0.1:8002 I want client requests to both both 8001 and 8002 ports to share the content of the same cache. So if one client hits :8002 with a URL and another later hits :8001 with the same URL, I want the latter to retrieve the content cached by the former request. 
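
On Jim's last point: if the application does need to see the original port, an alternative sketch (untested against this exact setup) is to leave the Host header alone and strip the port only when building the hash key, so that both listen ports share the same cache objects:

sub vcl_hash {
    set req.hash += req.url;
    # ignore any :port suffix in the Host header so requests arriving on
    # different listen ports are stored and found under the same key
    set req.hash += regsub(req.http.host, ":.*$", "");
    return (hash);
}
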
In my testing, however, it seems that this is not happening, that the doc is getting cached once per varnishd port. First on 8001, we see it's not from cache as this is the first request: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421263 Then to 8002 where I'd hope it was returned from cache, but the X-Varnish and Age headers indicate it's not: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421264 Now back to the first port, 8001, and we see it is in fact returned from cache: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421265 730421263 And if I try 8002 again, it's also returned from cache: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421266 730421264 Is there a way to make both ports share the same hash? I'm guessing the listening port is included with the hash key so that's why it appears each URL is being stored separately. If so, is there a way to remove the listening port from the hash key the document is stored under? Or would that even work? Thanks. _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From straightflush at gmail.com Mon Mar 7 23:34:45 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 17:34:45 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: you need to enable a probe in the backend for this to work i believe. On Mon, Mar 7, 2011 at 3:58 PM, Drew Smathers wrote: > Hi all, > > I'm trying to grace as a means of ensuring that cached content is > delivered from varnish past it's TTL if backends can't generate a > response. With some experiments this does not seem to happen with our > setup. After an object is cached, varnish still returns a 503 within > the grace period if a backend goes down. Below are details. > > version: varnish-2.1.4 SVN 5447M > > I stripped down my VCL to the following to demonstrate: > > backend webapp { > .host = "127.0.0.1"; > .port = "8000"; > } > > sub vcl_recv { > set req.backend = webapp; > set req.grace = 1h; > } > > > sub vcl_fetch { > set beresp.grace = 24h; > } > > Running varnish: > > varnishd -f simple.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 > > > First request: > > GET /some/path/ HTTP/1.1 > Accept-Language: en > > HTTP/1.1 200 OK > Server: WSGIServer/0.1 Python/2.6.6 > Vary: Authorization, Accept-Language, X-Gttv-Apikey > Etag: "e9c12380818a05ed40ae7df4dad67751" > Content-Type: application/json; charset=utf-8 > Content-Language: en > Cache-Control: max-age=30 > Content-Length: 425 > Date: Mon, 07 Mar 2011 16:12:56 GMT > X-Varnish: 377135316 377135314 > Age: 6 > Via: 1.1 varnish > Connection: close > > > Wait 30 seconds, kill backend app, then make another request through > varnish: > > GET /some/path/ HTTP/1.1 > Accept-Language: en > > HTTP/1.1 503 Service Unavailable > Server: Varnish > Retry-After: 0 > Content-Type: text/html; charset=utf-8 > Content-Length: 418 > Date: Mon, 07 Mar 2011 16:14:02 GMT > X-Varnish: 377135317 > Age: 0 > Via: 1.1 varnish > Connection: close > > Any help or clarification on request grace would be appreciated. 
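
To make that concrete: grace only pays off when Varnish knows the backend is in trouble, either through a health probe (for a backend that is down) or saint mode (for one that answers, but answers badly), and the stripped-down VCL above has neither. A sketch that reuses the same 127.0.0.1:8000 backend; the probe URL, timings and thresholds are only examples:

backend webapp {
    .host = "127.0.0.1";
    .port = "8000";
    .probe = {
        .url = "/";          # any cheap URL the application answers quickly
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

sub vcl_recv {
    set req.backend = webapp;
    if (!req.backend.healthy) {
        # backend marked sick by the probe: accept stale objects for an hour
        set req.grace = 1h;
    } else {
        set req.grace = 30s;
    }
}

sub vcl_fetch {
    # keep expired objects around so there is something to serve gracefully
    set beresp.grace = 24h;
    # saint mode: temporarily shun a backend that responds with errors
    if (beresp.status == 500 || beresp.status == 503) {
        set beresp.saintmode = 10s;
        restart;
    }
}
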
> > Thanks, > -Drew > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Mon Mar 7 23:39:40 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 23:39:40 +0100 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 9:58 PM, Drew Smathers wrote: > Hi all, > > I'm trying to grace as a means of ensuring that cached content is > delivered from varnish past it's TTL if backends can't generate a > response. That's "Saint Mode" - please see http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode I see that there isn't too much details on the semantics there. I'll see if I can add some details. Regards, Per. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew.smathers at gmail.com Mon Mar 7 23:52:44 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Mon, 7 Mar 2011 17:52:44 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 5:39 PM, Per Buer wrote: > On Mon, Mar 7, 2011 at 9:58 PM, Drew Smathers > wrote: >> >> Hi all, >> >> I'm trying to grace as a means of ensuring that cached content is >> delivered from varnish past it's TTL if backends can't generate a >> response. > > That's "Saint Mode" - please > see?http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode > I see that there isn't too much details on the semantics there. I'll see if > I can add some details. Hi Per, I actually tried using saintmode for this problem but one point that I found tricky is that saintmode (as far as i can tell from docs) can only be set on beresp. If the backend is up, that's great because I can check a non-200 status in vcl_fetch() and set. But in the case of all backends being down, vcl_fetch() doesn't even get invoked and there isn't any other routine and object in the routine's execution context (that I know of) where I can set saintmode and restart. Thanks, -Drew From junxian.yan at gmail.com Tue Mar 8 06:22:13 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 21:22:13 -0800 Subject: Weird "^" in the regex of varnish In-Reply-To: References: Message-ID: I upgraded varnish to 2.1.5 and used log function to trace the req.url and found there was host name in 'req.url'. But I didn't find any more description about this format in wiki. So I have to do a regsub before entering every function. Dose it make sense? Below is varnish log, 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1299561539 1.0 12 SessionOpen c 10.0.2.130 56799 :6081 12 ReqStart c 10.0.2.130 56799 1589705637 12 RxRequest c GET 12 RxURL c http://staging.test.com/purge/tables/vyulrh/summary.js?grid_state_id=3815 On Mon, Mar 7, 2011 at 7:56 AM, Junxian Yan wrote: > Hi Guys > > I encountered this issue in two different environment(env1 and env2). 
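The regsub Junxian describes would look roughly like this (a sketch only, assuming the absolute-URI form shown in the log above; doing it once at the top of vcl_recv avoids repeating it in every subroutine):

    sub vcl_recv {
        # Some clients put the full URI ("http://host/path?query") on the
        # request line, so req.url starts with the host name.  Strip the
        # scheme and host so later path-anchored regexes such as
        # "^/tables/..." match as intended.
        if (req.url ~ "^https?://") {
            set req.url = regsub(req.url, "^https?://[^/]+", "");
        }
    }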
> The sample code is like: > in vcl_fetch() > > else if (req.url ~ "^/tables/\w{6}/summary.js") { > if (req.http.Set-Cookie !~ " u=\w") { > unset beresp.http.Set-Cookie; > set beresp.ttl = 2h; > set beresp.grace = 22h; > return(deliver); > } else { > return(pass); > } > } > > In env1, the request like > http://mytest.com/api/v2/tables/vyulrh/read.jsaml can enter lookup and > then enter fetch to create a new cache entry. Next time, the same request > will hit cache and do not do fetch anymore > In env2, the same request enter and go into vcl_fetch, the regex will fail > and can not enter deliver, so the resp will be sent to end user without > cache creating. > > I'm not sure if there is somebody has the same issue. Is it platform > related ? > > > R > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn at ruberg.no Tue Mar 8 07:20:47 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Tue, 08 Mar 2011 07:20:47 +0100 Subject: Weird "^" in the regex of varnish In-Reply-To: References: Message-ID: <4D75CABF.7020403@ruberg.no> On 03/08/2011 06:22 AM, Junxian Yan wrote: > I upgraded varnish to 2.1.5 and used log function to trace the req.url > and found there was host name in 'req.url'. But I didn't find any more > description about this format in wiki. > So I have to do a regsub before entering every function. Dose it make > sense? > > Below is varnish log, > > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1299561539 1.0 > 12 SessionOpen c 10.0.2.130 56799 :6081 > 12 ReqStart c 10.0.2.130 56799 1589705637 > 12 RxRequest c GET > 12 RxURL c > http://staging.test.com/purge/tables/vyulrh/summary.js?grid_state_id=3815 Different User-Agents send different req.url. To normalize them, see http://www.varnish-cache.org/trac/wiki/VCLExampleNormalizingReqUrl Note that technically, there's nothing wrong with using hostnames in req.url, apart from possibly storing the same object under different names. However, as you have found out, some regular expressions might not work as intended until you normalize req.url. -- Bj?rn From tfheen at varnish-software.com Tue Mar 8 07:55:24 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 08 Mar 2011 07:55:24 +0100 Subject: varnishncsa and -F option? In-Reply-To: <4D7548B0.9090608@digome.com> (Brice Burgess's message of "Mon, 07 Mar 2011 15:05:52 -0600") References: <4D7548B0.9090608@digome.com> Message-ID: <8762ru3xur.fsf@qurzaw.varnish-software.com> ]] Brice Burgess | Is there a production-ready version of varnishncsa that supports the | -F | switch implemented 4 months ago here: | http://www.varnish-cache.org/trac/changeset/46b90935e56c7a448fb33342f03fdcbd14478ac2? It'll be in 3.0. | The -F / LogFormat switch allows for VirtualHost support -- although | appears to have missed the 2.1.5 release? It was never intended for or aimed at the 2.1 branch. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From paul.lu81 at gmail.com Mon Mar 7 21:37:49 2011 From: paul.lu81 at gmail.com (Paul Lu) Date: Mon, 7 Mar 2011 12:37:49 -0800 Subject: A lot of if statements to handle hostnames Message-ID: Hi, I have to work with a lot of domain names in my varnish config and I was wondering if there is an easier to way to match the hostname other than a series of if statements. Is there anything like a hash? Or does anybody have any C code to do this? 
example pseudo code: ================================= vcl_recv(){ if(req.http.host == "www.domain1.com") { set req.backend = www_domain1_com; # more code return(lookup); } if(req.http.host == "www.domain2.com") { set req.backend = www_domain2_com; # more code return(lookup); } if(req.http.host == "www.domain3.com") { set req.backend = www_domain3_com; # more code return(lookup); } } ================================= Thank you, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From junxian.yan at gmail.com Tue Mar 8 08:16:50 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 23:16:50 -0800 Subject: should beresp will be added into cache? Message-ID: Hi Guys I added some logic to change the beresp header in vcl_fetch. And I also do lookup for the same request in vcl_recv. The handling process I expected should be: the first incoming request will be changed by fetch logic and the second request should use the cache with the changed part But the actually result is the change parts are not be cached Here is my code: in vcl_fetch if (req.url ~ "/(images|javascripts|stylesheets)/") { unset beresp.http.Set-Cookie; set beresp.http.Cache-Control = "private, max-age = 3600, must-revalidate"; # 1 hour set beresp.ttl = 10m; set beresp.http.clientcache = "1"; return(deliver); } And I also wanna the response of the second request have the max-age = 3600 and clientcache = 1. The actual result is max-age = 0 and no clientcache in response Found some explanation in varnish doc lib, seems not as exactly as I expected. Is the beresp inserted into cache totally? deliverPossibly insert the object into the cache, then deliver it to the client. Control will eventually pass to vcl_deliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From indranilc at rediff-inc.com Tue Mar 8 08:32:53 2011 From: indranilc at rediff-inc.com (Indranil Chakravorty) Date: 8 Mar 2011 07:32:53 -0000 Subject: =?utf-8?B?UmU6IEEgbG90IG9mIGlmIHN0YXRlbWVudHMgdG8gaGFuZGxlIGhvc3RuYW1lcw==?= Message-ID: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Apart from improving the construct to if ... elseif , could you please tell me the reason why you are looking for a different way? Is it only for ease of writing less statements or is there some other reason you foresee? I am asking because we also have a number of similar construct in our vcl. Thanks. Thanks, Neel On Tue, 08 Mar 2011 12:31:11 +0530 Paul Lu <paul.lu81 at gmail.com> wrote >Hi, > >I have to work with a lot of domain names in my varnish config and I was wondering if there is an easier to way to match the hostname other than a series of if statements. Is there anything like a hash? Or does anybody have any C code to do this? 
> >example pseudo code: >================================= >vcl_recv(){ > > if(req.http.host == "www.domain1.com") > { > set req.backend = www_domain1_com; > # more code > return(lookup); > } > if(req.http.host == "www.domain2.com") > { > set req.backend = www_domain2_com; > # more code > return(lookup); > } > if(req.http.host == "www.domain3.com") > { > set req.backend = www_domain3_com; > # more code > return(lookup); > } >} >================================= > >Thank you, >Paul > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Tue Mar 8 08:39:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 07:39:08 +0000 Subject: Lots of configs In-Reply-To: Your message of "Mon, 07 Mar 2011 08:02:27 EST." <4D74D763.706@sbgnet.com> Message-ID: <39035.1299569948@critter.freebsd.dk> In message <4D74D763.706 at sbgnet.com>, David Helkowski writes: >The best way would be to use a jump table. >By that, I mean to make multiple subroutines in C, and then to jump to >the different subroutines by looking >up pointers to the subroutines using a string hashing/lookup system. The sheer insanity of this proposal had me wondering which vending machine gave you a CS degree instead of the cola you ordered. But upon reading: >I attempted to do this myself when I first started using >varnish, but I was having problems with varnish crashing >when attempting to use the code I wrote in C. There may be >limitations to the C code that can be used. I realized that you're probably just some troll trying to have a bit of a party here on our mailing list, or possibly some teenager in his mothers basement, from where you "rulez teh w0rld" because he is quite clearly Gods Gift To Computers. Or quite likely both. The fact that you have to turn to Wikipedia to find out how many instructions a contemporary CPU can execute per second, and then get the answer wrong by about an order of magnitude makes me almost sad for you. But you may have a future in you still, but there are a lot of good books you will have read to unlock it. I would recommend you start out with "The Mythical Man Month", and continue with pretty much anything Kernighan has written on the subject of programming. At some point, you will understand what Dijkstra is talking about here: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD01xx/EWD117.html Until then, you should not attempt to do anything with a computer that could harm other people. And now: Please shut up before I mock you. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Tue Mar 8 08:41:26 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 07:41:26 +0000 Subject: should beresp will be added into cache? In-Reply-To: Your message of "Mon, 07 Mar 2011 23:16:50 PST." 
Message-ID: <39065.1299570086@critter.freebsd.dk> In message , Junx ian Yan writes: >Here is my code: >in vcl_fetch > if (req.url ~ "/(images|javascripts|stylesheets)/") { > unset beresp.http.Set-Cookie; > set beresp.http.Cache-Control = "private, max-age = 3600, >must-revalidate"; # 1 hour > set beresp.ttl = 10m; > set beresp.http.clientcache = "1"; > return(deliver); > } > >And I also wanna the response of the second request have the max-age = 3600 >and clientcache = 1. The actual result is max-age = 0 and no clientcache in >response Wouldn't it be easier to set the Cache-Control in vcl_deliver then ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From junxian.yan at gmail.com Tue Mar 8 09:23:06 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Tue, 8 Mar 2011 00:23:06 -0800 Subject: should beresp will be added into cache? In-Reply-To: <39065.1299570086@critter.freebsd.dk> References: <39065.1299570086@critter.freebsd.dk> Message-ID: Actually, I need to set clientcache in fetch. But seems varnish can not add this attribute into cache list. On Mon, Mar 7, 2011 at 11:41 PM, Poul-Henning Kamp wrote: > In message , > Junx > ian Yan writes: > > >Here is my code: > >in vcl_fetch > > if (req.url ~ "/(images|javascripts|stylesheets)/") { > > unset beresp.http.Set-Cookie; > > set beresp.http.Cache-Control = "private, max-age = 3600, > >must-revalidate"; # 1 hour > > set beresp.ttl = 10m; > > set beresp.http.clientcache = "1"; > > return(deliver); > > } > > > >And I also wanna the response of the second request have the max-age = > 3600 > >and clientcache = 1. The actual result is max-age = 0 and no clientcache > in > >response > > Wouldn't it be easier to set the Cache-Control in vcl_deliver then ? > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Tue Mar 8 14:03:47 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 08:03:47 -0500 Subject: Lots of configs In-Reply-To: <39035.1299569948@critter.freebsd.dk> References: <39035.1299569948@critter.freebsd.dk> Message-ID: <4D762933.5080905@sbgnet.com> To write this sort of message, and to the list no doubt, is nothing short of immature. In so much as what I said caused such a response, I apologize for those having bothered to read this. That said, I am going to response to the points made. I would appreciate a 3rd party ( well a 4th at this point ), who has more experience and maturity, would chip in and provide some order to this discussion. On 3/8/2011 2:39 AM, Poul-Henning Kamp wrote: > In message<4D74D763.706 at sbgnet.com>, David Helkowski writes: > >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then to jump to >> the different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup system. > The sheer insanity of this proposal had me wondering which vending > machine gave you a CS degree instead of the cola you ordered. They don't teach jump tables in any college I know of. 
I believe I first learned about them in my own readings of 'Peter Norton's Assembly Language'; a book I first read perhaps about 15 years ago. I still have the book on the shelf. I don't think Peter Norton would ever call an ingenious solution to a challenging problem 'sheer insanity'. He would very likely laugh at the simplicity of what I am suggesting. > But upon reading: > >> I attempted to do this myself when I first started using >> varnish, but I was having problems with varnish crashing >> when attempting to use the code I wrote in C. There may be >> limitations to the C code that can be used. > I realized that you're probably just some troll trying to have > a bit of a party here on our mailing list, or possibly some teenager > in his mothers basement, from where you "rulez teh w0rld" because > he is quite clearly Gods Gift To Computers. This is called an Ad hominen attack. Belittling those you interact with in no way betters your opinion. I am also not sure why this is a response to what you quoted me on. I wrote what I did because I am actually curious if someone has time and effort to get hash tables working in VCL. I would to see a working rendition of it. I didn't really spend much time attempting to make it work, because my own usage of VCL didn't end up requiring it. That is, my statement here is an admission of my own lack of knowledge of the limitations of inline C in VCL. I am not trolling and would seriously like to see working hash tables. > Or quite likely both. > > The fact that you have to turn to Wikipedia to find out how many > instructions a contemporary CPU can execute per second, and then > get the answer wrong by about an order of magnitude makes me almost > sad for you. I will test your code and write a subroutine demonstrating the reality of the numbers I have quoted. Once I have done that I will respond to this statement. > But you may have a future in you still, but there are a lot of good > books you will have read to unlock it. > > I would recommend you start out with "The Mythical Man Month", and > continue with pretty much anything Kernighan has written on the > subject of programming. I have read many discussions on the book in question, and am quite familiar with the writing of Kernighan and Ritchie. They are well written authors on the C language. Their methodologies are also outdated. Their book a on C is over 20 years old at this point. Obviously good information doesn't expire, but a lot of good things have been learned since then. I am not interested in playing knowledge based games. Programming is not a trivia game; it is about applying workable solutions to real world problems in an efficient manner. > At some point, you will understand what Dijkstra is talking about here: > > http://www.cs.utexas.edu/users/EWD/transcriptions/EWD01xx/EWD117.html No doubt this is a well written piece that bears a response of its own. I am not going to respond to this link with any detail at the moment, because you haven't bothered to explain the purpose of putting it here; other than to link to something more well written than your own childish attack. > Until then, you should not attempt to do anything with a computer > that could harm other people. I hardly see how answering a request for the right way to do something with the appropriate correct way is something that will harm. It is up to the reader to decide what method they which to use. Also, I am concerned with your lack of confidence in other users of Varnish. 
I think that there are many learned users of it, and a good number of them are quite capable of taking my hash table suggestion and making it a usable reality. Once it is a reality it could easily be used by other less experienced users of Varnish. How is having an open discussion about an efficient solution to a recurring problem harmful? > And now: Please shut up before I mock you. If you wish to mock; feel free. I would prefer if you send me a direct email and do not send such nonsense to the list, nor to other uninvolved parties. > Poul-Henning > From dhelkowski at sbgnet.com Tue Mar 8 14:20:01 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 08:20:01 -0500 Subject: Lots of configs In-Reply-To: <4D752966.7000203@sbgnet.com> References: <4D752966.7000203@sbgnet.com> Message-ID: <4D762D01.3030209@sbgnet.com> First off, I would like to thank Per Buer for pointing out that I am off by a factor of 1000 in the following statements. I have corrected for that below so that my statements are more clear. My mistake was in considering modern processors as 2 megahertz instead of 2 gigahertz. On 3/7/2011 1:52 PM, David Helkowski wrote: > A modern CPU can run, at most, around 10 million -assembly based- > instructions per second. Make that 10 billion. The math I am using is 5 x 2 gigahertz. > See http://en.wikipedia.org/wiki/Instructions_per_second > A regular expression compare is likely at least 20 or so assembly > instructions. > That gives around 500,000 regular expression compares if you are using > 100% of the > CPU just for that. A reasonable amount of CPU to consume would be 30% > ( at most ). > So; you are left with around 150k regular expression checks per second. The correct numbers here are 500 million. A regular expression compare more likely takes 40 assembly instructions, so I am going to cut this to 250 million. LIkewise, at 30%, that leads to about 80 million. > > Lets suppose there are 500 different domains. On average, you will be > doing 250 if/else > checks per call. 150k / 250 = 600. The new number is 80 million / 250 = 320k > That means that you will get, under fair conditions, a max > of about 600 hits per second. 320,000 hits per second. Obviously, no server is capable of serving up such a number. Just using regular expressions in a cascading if/then will work fine in this case. My apologies for the confusion in this regard. What I can see is a server serving around 10,000 hits per second. That would require about 30x the number of domains. You don't really want to eat up CPU usage for just if/then though, so probably at around 10x the number of domains you'd want to switch to a hash table. So; correcting my conclusion; if you are altering configuration for 5000 domains, then you are going to need a hash table. Otherwise you are going to be fine just using a cascading if/then, despite it being ugly. > The person asking the question likely has 500 domains running. > That gives a little over 1 hit possible per second per domain. Do you > think that is an acceptable > solution for this person? I think not. > > Compare it to a hash lookup. A hash lookup, using a good minimal > perfect hashing algorithms, > will take at most around 10 operations. Using the same math as above, > that gives around 300k > lookups per second. A hash would be roughly 500 times faster than > using if/else... Note that despite my being off by a factor of 1000, the multiplication still holds out. 
If you use a hash table, even with only 500 domains, a hash table will -still- be 500 times faster. I still think it would be great to have a hash table solution available for use in VCL. > > On 3/7/2011 1:35 PM, Per Buer wrote: >> Hi, >> >> On Sun, Mar 6, 2011 at 11:39 PM, AD > > wrote: >> >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end >> up being 100-500 includes to map to each hostname and then a long >> if/then block based on the hostname. Is there a more scalable >> way to deal with this? >> >> >> CPU and memory bandwidth is abundant on modern servers. I'm actually >> not sure that having a 500 entries long if/else statement will hamper >> performance at all. Remember, there will be no system calls. I would >> guess a modern server will execute at least a four million >> regex-based if/else per second per CPU core if most of the code and >> data will be in the on die cache. So executing 500 matches should >> take about 0.5ms. >> >> It might not make sense to optimize this. >> >> -- >> Per Buer, Varnish Software >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer >> Varnish makes websites fly! >> Want to learn more about Varnish? >> http://www.varnish-software.com/whitepapers >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Tue Mar 8 15:01:03 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 08 Mar 2011 15:01:03 +0100 Subject: Lots of configs In-Reply-To: <4D762933.5080905@sbgnet.com> (David Helkowski's message of "Tue, 08 Mar 2011 08:03:47 -0500") References: <39035.1299569948@critter.freebsd.dk> <4D762933.5080905@sbgnet.com> Message-ID: <87d3m11zkw.fsf@qurzaw.varnish-software.com> ]] David Helkowski (if you could add a blank line between quoted text and your own addition that makes it much easier to read your replies) Hi, | This is called an Ad hominen attack. Belittling those you interact | with in no way betters your opinion. I am also not sure why this is a | response to what you quoted me on. I wrote what I did because I am | actually curious if someone has time and effort to get hash tables | working in VCL. I would to see a working rendition of it. I didn't | really spend much time attempting to make it work, because my own | usage of VCL didn't end up requiring it. We'll probably end up implementing hash tables in a vmod at some point, but it's not anywhere near the top of the todo list. What we've been discussing so far would probably not have been useful for your use case above, though. As for doing 3-500 regex or string matches per request that's hardly a big problem for us as Per's numbers demonstrate. cheers, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From phk at phk.freebsd.dk Tue Mar 8 15:53:20 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 14:53:20 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 18:27:29 +0400." 
Message-ID: <88534.1299596000@critter.freebsd.dk> In message , Jona than DeMello writes: >Poul simply comes across as a nervous child, throwing every superiority >imposing cliche out there because he thought a team member was >'threatened'. I received a couple of complaints about flames (on and off list) originating from David, and after reading his contribution, decided that he was not worth the bother, and decided to call his bullshit and get it over with. "Jump Tables" was a very neat concept, about 25-30 years ago, when people tried to squeeze every bit of performance out of a 4.77MHz i8088 chip in a IBM PC. They are however just GOTO in disguise and they have all the disadvantages of GOTO, without, and this is important: without _any_ benefits at all on a modern pipelined and deeply cache starved CPU. That's why I pointed David at Dijkstra epistle and other literature for building moral character as a programmer. If David had come up with a valid point or a good suggestion, then I would possibly tolerate a minimum of behavioural problems from him. But suggesting we abandon 50 years of progress towards structured programming, and use GOTOs to solve a nonexistant problem, for which there are perfectly good and sensible methods, should it materialize, just because he saw a neat trick in an old book and wanted to show of his skillz, earns him no right to flame people in this project. And that's the end of that. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From chris.shenton at nasa.gov Tue Mar 8 18:11:39 2011 From: chris.shenton at nasa.gov (Shenton, Chris (HQ-LM020)[INDYNE INC]) Date: Tue, 8 Mar 2011 11:11:39 -0600 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? In-Reply-To: References: Message-ID: On Mar 7, 2011, at 4:48 PM, Jim Hayter wrote: > In my environment, port numbers may be on the request, but are not needed to respond nor should they influence the cache. In my vcl_recv, I have the following lines: > > /* determine vhost name w/out port number */ > set req.http.newhost = regsub(req.http.host, "([^:]*)(:.*)?$", "\1"); > set req.http.host = req.http.newhost; > > This strips off the port number from the host name in the request. Doing it this way, the port number is discarded and NOT passed on to the application. It is also not present when creating and looking up hash entries. This looks like it does exactly what we need. I thought I was going have to monkey with server.port, or what the vcl_hash includes in its key calculation, but this is straight-forward. Thanks a lot, Jim. From dhelkowski at sbgnet.com Tue Mar 8 20:50:57 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 14:50:57 -0500 Subject: Lots of configs In-Reply-To: <88534.1299596000@critter.freebsd.dk> References: <88534.1299596000@critter.freebsd.dk> Message-ID: <4D7688A1.8020202@sbgnet.com> On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: > In message, Jona > than DeMello writes: > >> Poul simply comes across as a nervous child, throwing every superiority >> imposing cliche out there because he thought a team member was >> 'threatened'. > I received a couple of complaints about flames (on and off list) > originating from David, and after reading his contribution, decided > that he was not worth the bother, and decided to call his bullshit > and get it over with. 
I will admit to writing one email angrily responding to Per Buer. My anger was due primarily to the statement "if you got to deploy a whole bunch of scary inline C that will seriously intimidate the summer intern and makes all the other fear the config it's just not worth it." The contents of that private email essentially boiled to me saying, in many more words: "not everyone is as stupid as you". Now, I agree that was distasteful, but it isn't much different than you stating you are 'calling my bullshit'. I am not quite sure why you, and others, have decided that this is a pissing match. Also; if it helps anything; I apologize for my ranting email to Per Buer. It was certainly over the line. I am sorry for going off on that. I have my reasons but I would still like to have a meaningful discussion. Per Buer, the very person I ticked off, admitted that a hash lookup is certainly faster. Other people are expressing interested in having a hash system in place with VCL. I myself am even willing to write the system. Sure I may be obnoxious at times in my presentation of what I want done, but I hardly thing it calls for your response or arrogant counter-attitude. > "Jump Tables" was a very neat concept, about 25-30 years ago, when > people tried to squeeze every bit of performance out of a 4.77MHz > i8088 chip in a IBM PC. Jump tables, and gotos, are still perfectly usable on modern system. Good techniques, in their proper place, don't expire. Hash tables for instance certainly have not been replaced by cascading 'if else' structures. Note that I am suggesting hash tables combined with jump tables. I don't see any legitimate objection to such an idea. > They are however just GOTO in disguise and they have all the > disadvantages of GOTO, without, and this is important: without _any_ > benefits at all on a modern pipelined and deeply cache starved CPU. So we should continue using cascading 'if else'? That is _very_ efficient on modern CPU architecture? ... > That's why I pointed David at Dijkstra epistle and other literature > for building moral character as a programmer. Yeah... speaking of that; I read the beginning of the article at the very least. It immediately starts talking about code elegance and the purity of solutions. If anything, it leans very heavily towards hash tables as opposed to long cascading 'if else'. > If David had come up with a valid point or a good suggestion, then > I would possibly tolerate a minimum of behavioural problems from him. How is 'can we please use hash tables' not a valid point and suggestion? > But suggesting we abandon 50 years of progress towards structured > programming, and use GOTOs to solve a nonexistant problem, for which > there are perfectly good and sensible methods, should it materialize, > just because he saw a neat trick in an old book and wanted to show > of his skillz, earns him no right to flame people in this project. Perfectly good and sensible methods such as what? 500 cascading 'if else' for each call? Are you seriously suggesting that is a technique honed to perfection in the last 50 years that is based on structured programming? I read about jump tables and hashing many many years ago. It is hardly a neat trick I recently dug out of an old book. Let me ask you this: have you heard of Bob Jenkins? Would you say his analysis of hash tables is outdated and meaningless? In regard to showing off skills; I could really care less what you or anyone else think of my coding skills. 
I responded to the initial question because I wanted to honestly point people towards a better solution to a recurring problem that has been mentioned in the list. Your last statement implies people can 'earn' the right to flame. ? Is that what you are doing? Using your 'earned' right to flame me? > And that's the end of that. Having the last word is something given to the victor. Arbitrarily declaring your statements to be the last word is pretty arrogant. > Poul-Henning > From drew.smathers at gmail.com Tue Mar 8 21:34:48 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Tue, 8 Mar 2011 15:34:48 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 5:52 PM, Drew Smathers wrote: > On Mon, Mar 7, 2011 at 5:39 PM, Per Buer wrote: [snip] >> >> That's "Saint Mode" - please >> see?http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode >> I see that there isn't too much details on the semantics there. I'll see if >> I can add some details. > > Hi Per, > > I actually tried using saintmode for this problem but one point that I > found tricky is that saintmode (as far as i can tell from docs) can > only be set on beresp. If the backend is up, that's great because I > can check a non-200 status in vcl_fetch() and set. But in the case of > all backends being down, vcl_fetch() doesn't even get invoked and > there isn't any other routine and object in the routine's execution > context (that I know of) where I can set saintmode and restart. > Sorry to bump my own thread, but does anyone know of a way to set saintmode if a backend is down, vs. up and misbehaving (returning 500, etc)? Also, I added a backend probe and this indeed caused grace to kick in once the probe determined the backend as sick.I think the docs should be clarified if this isn't a bug (grace not working without probe): http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers Finally it's somewhat disconcerting that in the interim between a cache expiry and before varnish determines a backend as down (sick) it will 503 - so this could affect many clients during that window. Ideally, I'd like to successfully service requests if there's an object in the cache - period - but I guess this isn't possible now with varnish? Thanks, -Drew From ronan at iol.ie Tue Mar 8 21:38:08 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 8 Mar 2011 20:38:08 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs Message-ID: I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. The current load is light (up to a couple of thousand active sessions, peak is around five thousand). Varnish has a fairly simple config with a director consisting of two Apache backends: backend backend1 { .host = "1.2.3.4"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 1.2.3.4" "Connection: close"; } } backend backend2 { .host = "5.6.7.8"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 5.6.7.8" "Connection: close"; } } The numbers are modest, but significant - about 1 POST in a hundred fails. 
I've upped the backend timeouts to 90 seconds (first_byte / between_bytes) and I'm pretty confident they're responding in well under that time. varnishlog does not show any backend health changes. A typical event looks like: Varnish: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST http://www.sitename.net/newreply.php?do=postreply&t=285227 HTTP/1.1" 503 2623 Backend: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST /newreply.php?do=postreply&t=285227 HTTP/1.1" 200 2686 The POST appears to work fine on the backend but the user gets a 503 from Varnish. It's not unusual to see users getting the error several times in a row (presumably re-submitting the post): a.b.c.d - - [08/Mar/2011:18:21:23 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:36 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:50 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 A typical request is below. The first attempt fails with: 33 FetchError c http first read error: -1 0 (Success) there is presumably a restart and the second attempt (sometimes to backend1, sometimes backend2) fails with: 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This pattern has been the same on the few transactions I've examined in detail. The full log output of a typical request is below. I'm stumped. Has anybody got any ideas what might be causing this? -Ronan 33 RxRequest c POST 33 RxURL c /ajax.php 33 RxProtocol c HTTP/1.1 33 RxHeader c Accept: */* 33 RxHeader c Accept-Language: nl-be 33 RxHeader c Referer: http://www.redcafe.net/ 33 RxHeader c x-requested-with: XMLHttpRequest 33 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 33 RxHeader c Accept-Encoding: gzip, deflate 33 RxHeader c User-Agent: Mozilla/4.0 (compatible; ...) 33 RxHeader c Host: www.sitename.net 33 RxHeader c Content-Length: 82 33 RxHeader c Connection: Keep-Alive 33 RxHeader c Cache-Control: no-cache 33 RxHeader c Cookie: ... 33 VCL_call c recv 33 VCL_return c pass 33 VCL_call c hash 33 VCL_return c hash 33 VCL_call c pass 33 VCL_return c pass 33 Backend c 44 backend backend1 44 TxRequest b POST 44 TxURL b /ajax.php 44 TxProtocol b HTTP/1.1 44 TxHeader b Accept: */* 44 TxHeader b Accept-Language: nl-be 44 TxHeader b Referer: http://www.sitename.net/ 44 TxHeader b x-requested-with: XMLHttpRequest 44 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 44 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 44 TxHeader b Host: www.sitename.net 44 TxHeader b Content-Length: 82 44 TxHeader b Cache-Control: no-cache 44 TxHeader b Cookie: ... 44 TxHeader b Accept-Encoding: gzip 44 TxHeader b X-Forwarded-For: a.b.c.d 44 TxHeader b X-Varnish: 657185708 * 33 FetchError c http first read error: -1 0 (Success) 44 BackendClose b backend1 33 Backend c 47 backend backend2 47 TxRequest b POST 47 TxURL b /ajax.php 47 TxProtocol b HTTP/1.1 47 TxHeader b Accept: */* 47 TxHeader b Accept-Language: nl-be 47 TxHeader b Referer: http://www.sitename.net/ 47 TxHeader b x-requested-with: XMLHttpRequest 47 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 47 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 47 TxHeader b Host: www.sitename.net 47 TxHeader b Content-Length: 82 47 TxHeader b Cache-Control: no-cache 47 TxHeader b Cookie: ... 
47 TxHeader b Accept-Encoding: gzip 47 TxHeader b X-Forwarded-For: a.b.c.d 47 TxHeader b X-Varnish: 657185708 * 33 FetchError c backend write error: 11 (Resource temporarily unavailable) 47 BackendClose b backend2 33 VCL_call c error 33 VCL_return c deliver 33 VCL_call c deliver 33 VCL_return c deliver 33 TxProtocol c HTTP/1.1 33 TxStatus c 503 33 TxResponse c Service Unavailable 33 TxHeader c Server: Varnish 33 TxHeader c Retry-After: 0 33 TxHeader c Content-Type: text/html; charset=utf-8 33 TxHeader c Content-Length: 2623 33 TxHeader c Date: Tue, 08 Mar 2011 17:08:33 GMT 33 TxHeader c X-Varnish: 657185708 33 TxHeader c Age: 3 33 TxHeader c Via: 1.1 varnish 33 TxHeader c Connection: close 33 Length c 2623 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 33 SessionClose c error 33 StatSess c a.b.c.d 50044 3 1 1 0 1 0 235 2623 From perbu at varnish-software.com Tue Mar 8 21:51:55 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 8 Mar 2011 21:51:55 +0100 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: Hi Drew, list. On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers wrote: > Sorry to bump my own thread, but does anyone know of a way to set > saintmode if a backend is down, vs. up and misbehaving (returning 500, > etc)? > > Also, I added a backend probe and this indeed caused grace to kick in > once the probe determined the backend as sick.I think the docs should > be clarified if this isn't a bug (grace not working without probe): > > http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers Check out the trunk version of the docs. Committed some earlier today. > Finally it's somewhat disconcerting that in the interim between a > cache expiry and before varnish determines a backend as down (sick) it > will 503 - so this could affect many clients during that window. > Ideally, I'd like to successfully service requests if there's an > object in the cache - period - but I guess this isn't possible now > with varnish? > Actually it is. In the docs there is a somewhat dirty trick where set a marker in vcl_error, restart and pick up on the error and switch backend to one that is permanetly down. Grace kicks in and serves the stale content. Sometime post 3.0 there will be a refactoring of the whole vcl_error handling and we'll end up with something a bit more elegant. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Tue Mar 8 22:54:57 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Tue, 8 Mar 2011 16:54:57 -0500 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> I would look at setting a fail director. Restart if there is a 503, and if restarts > 0 select the patient director with very generous health checking. Your timeouts are reasonable, but try .timeout 20s and .threshold 1 for the patient director. Having a different view of the backends usually deals with occasional 503s. 
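Roughly what that fail-director setup could look like, as an untested sketch in 2.1 syntax: the *_patient backends and the "patient" director name are made up for illustration, the hosts and probe request are copied from the config above, and the relaxed .timeout/.threshold values follow Stefan's suggestion.

    backend backend1_patient {
        .host = "1.2.3.4";
        .port = "80";
        .first_byte_timeout = 90s;
        .probe = {
            .timeout = 20s;
            .interval = 5s;
            .window = 5;
            .threshold = 1;
            .request = "HEAD /favicon.ico HTTP/1.0"
                       "Connection: close";
        }
    }

    backend backend2_patient {
        .host = "5.6.7.8";
        .port = "80";
        .first_byte_timeout = 90s;
        .probe = {
            .timeout = 20s;
            .interval = 5s;
            .window = 5;
            .threshold = 1;
            .request = "HEAD /favicon.ico HTTP/1.0"
                       "Connection: close";
        }
    }

    director patient round-robin {
        { .backend = backend1_patient; }
        { .backend = backend2_patient; }
    }

    sub vcl_recv {
        if (req.restarts > 0) {
            # Second attempt after a 503: use the director with the
            # more generous view of backend health.
            set req.backend = patient;
        }
        # the existing logic keeps choosing the normal director otherwise
    }

    sub vcl_error {
        if (obj.status == 503 && req.restarts == 0) {
            return (restart);
        }
    }

One caution: this retries the POST, and Ronan's logs show the first attempt often succeeds on the backend even though the client gets the 503, so a blind restart can mean the application sees the same submission twice.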
Stefan Caunter Operations Torstar Digital m: (416) 561-4871 -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: March-08-11 3:38 PM To: varnish-misc at varnish-cache.org Subject: Varnish 503ing on ~1/100 POSTs I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. The current load is light (up to a couple of thousand active sessions, peak is around five thousand). Varnish has a fairly simple config with a director consisting of two Apache backends: backend backend1 { .host = "1.2.3.4"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 1.2.3.4" "Connection: close"; } } backend backend2 { .host = "5.6.7.8"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 5.6.7.8" "Connection: close"; } } The numbers are modest, but significant - about 1 POST in a hundred fails. I've upped the backend timeouts to 90 seconds (first_byte / between_bytes) and I'm pretty confident they're responding in well under that time. varnishlog does not show any backend health changes. A typical event looks like: Varnish: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST http://www.sitename.net/newreply.php?do=postreply&t=285227 HTTP/1.1" 503 2623 Backend: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST /newreply.php?do=postreply&t=285227 HTTP/1.1" 200 2686 The POST appears to work fine on the backend but the user gets a 503 from Varnish. It's not unusual to see users getting the error several times in a row (presumably re-submitting the post): a.b.c.d - - [08/Mar/2011:18:21:23 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:36 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:50 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 A typical request is below. The first attempt fails with: 33 FetchError c http first read error: -1 0 (Success) there is presumably a restart and the second attempt (sometimes to backend1, sometimes backend2) fails with: 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This pattern has been the same on the few transactions I've examined in detail. The full log output of a typical request is below. I'm stumped. Has anybody got any ideas what might be causing this? -Ronan 33 RxRequest c POST 33 RxURL c /ajax.php 33 RxProtocol c HTTP/1.1 33 RxHeader c Accept: */* 33 RxHeader c Accept-Language: nl-be 33 RxHeader c Referer: http://www.redcafe.net/ 33 RxHeader c x-requested-with: XMLHttpRequest 33 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 33 RxHeader c Accept-Encoding: gzip, deflate 33 RxHeader c User-Agent: Mozilla/4.0 (compatible; ...) 33 RxHeader c Host: www.sitename.net 33 RxHeader c Content-Length: 82 33 RxHeader c Connection: Keep-Alive 33 RxHeader c Cache-Control: no-cache 33 RxHeader c Cookie: ... 
33 VCL_call c recv 33 VCL_return c pass 33 VCL_call c hash 33 VCL_return c hash 33 VCL_call c pass 33 VCL_return c pass 33 Backend c 44 backend backend1 44 TxRequest b POST 44 TxURL b /ajax.php 44 TxProtocol b HTTP/1.1 44 TxHeader b Accept: */* 44 TxHeader b Accept-Language: nl-be 44 TxHeader b Referer: http://www.sitename.net/ 44 TxHeader b x-requested-with: XMLHttpRequest 44 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 44 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 44 TxHeader b Host: www.sitename.net 44 TxHeader b Content-Length: 82 44 TxHeader b Cache-Control: no-cache 44 TxHeader b Cookie: ... 44 TxHeader b Accept-Encoding: gzip 44 TxHeader b X-Forwarded-For: a.b.c.d 44 TxHeader b X-Varnish: 657185708 * 33 FetchError c http first read error: -1 0 (Success) 44 BackendClose b backend1 33 Backend c 47 backend backend2 47 TxRequest b POST 47 TxURL b /ajax.php 47 TxProtocol b HTTP/1.1 47 TxHeader b Accept: */* 47 TxHeader b Accept-Language: nl-be 47 TxHeader b Referer: http://www.sitename.net/ 47 TxHeader b x-requested-with: XMLHttpRequest 47 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 47 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 47 TxHeader b Host: www.sitename.net 47 TxHeader b Content-Length: 82 47 TxHeader b Cache-Control: no-cache 47 TxHeader b Cookie: ... 47 TxHeader b Accept-Encoding: gzip 47 TxHeader b X-Forwarded-For: a.b.c.d 47 TxHeader b X-Varnish: 657185708 * 33 FetchError c backend write error: 11 (Resource temporarily unavailable) 47 BackendClose b backend2 33 VCL_call c error 33 VCL_return c deliver 33 VCL_call c deliver 33 VCL_return c deliver 33 TxProtocol c HTTP/1.1 33 TxStatus c 503 33 TxResponse c Service Unavailable 33 TxHeader c Server: Varnish 33 TxHeader c Retry-After: 0 33 TxHeader c Content-Type: text/html; charset=utf-8 33 TxHeader c Content-Length: 2623 33 TxHeader c Date: Tue, 08 Mar 2011 17:08:33 GMT 33 TxHeader c X-Varnish: 657185708 33 TxHeader c Age: 3 33 TxHeader c Via: 1.1 varnish 33 TxHeader c Connection: close 33 Length c 2623 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 33 SessionClose c error 33 StatSess c a.b.c.d 50044 3 1 1 0 1 0 235 2623 _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From kbrownfield at google.com Tue Mar 8 22:56:51 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 8 Mar 2011 13:56:51 -0800 Subject: Lots of configs In-Reply-To: <4D7688A1.8020202@sbgnet.com> References: <88534.1299596000@critter.freebsd.dk> <4D7688A1.8020202@sbgnet.com> Message-ID: An O(1) solution (e.g., a hash table) is a perfectly valid optimization of an O(N) solution. But you are confusing an O(N) solution with an O(N) problem. If the O(N) solution *in actual bona fide reality *becomes a problem for someone's use-case, I'm sure that an O(1) solution can be implemented as necessary. If *enough *someones need this O(1) solution, then it will begin to show up on this project's official radar as a potential built-in VCL feature or vmod. It's that simple. If anyone else here wants to continue pettifogging with you, please let them elect to email you directly, rather than sharing this debate with those of us who don't. It will substantiate the character and experience that you profess to have. 
Cheers, -- kb On Tue, Mar 8, 2011 at 11:50, David Helkowski wrote: > On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: > >> In message, >> Jona >> than DeMello writes: >> >> Poul simply comes across as a nervous child, throwing every superiority >>> imposing cliche out there because he thought a team member was >>> 'threatened'. >>> >> I received a couple of complaints about flames (on and off list) >> originating from David, and after reading his contribution, decided >> that he was not worth the bother, and decided to call his bullshit >> and get it over with. >> > > I will admit to writing one email angrily responding to Per Buer. > My anger was due primarily to the statement "if you got to deploy a whole > bunch of scary inline C that will seriously intimidate the summer intern and > makes all the other fear the config it's just not worth it." > > The contents of that private email essentially boiled to me saying, in many > more words: > "not everyone is as stupid as you". > Now, I agree that was distasteful, but it isn't much different than you > stating you are > 'calling my bullshit'. > I am not quite sure why you, and others, have decided that this is a > pissing match. > > Also; if it helps anything; I apologize for my ranting email to Per Buer. > It was certainly > over the line. I am sorry for going off on that. I have my reasons but I > would still like > to have a meaningful discussion. > > Per Buer, the very person I ticked off, admitted that a hash lookup is > certainly faster. > Other people are expressing interested in having a hash system in place > with VCL. > I myself am even willing to write the system. > Sure I may be obnoxious at times in my presentation of what I want done, > but I hardly > thing it calls for your response or arrogant counter-attitude. > > "Jump Tables" was a very neat concept, about 25-30 years ago, when >> people tried to squeeze every bit of performance out of a 4.77MHz >> i8088 chip in a IBM PC. >> > > Jump tables, and gotos, are still perfectly usable on modern system. Good > techniques, > in their proper place, don't expire. Hash tables for instance certainly > have not been > replaced by cascading 'if else' structures. > > Note that I am suggesting hash tables combined with jump tables. I don't > see any > legitimate objection to such an idea. > > They are however just GOTO in disguise and they have all the >> disadvantages of GOTO, without, and this is important: without _any_ >> benefits at all on a modern pipelined and deeply cache starved CPU. >> > > So we should continue using cascading 'if else'? That is _very_ efficient > on modern > CPU architecture? ... > > That's why I pointed David at Dijkstra epistle and other literature >> for building moral character as a programmer. >> > > Yeah... speaking of that; I read the beginning of the article at the very > least. It immediately starts > talking about code elegance and the purity of solutions. If anything, it > leans very > heavily towards hash tables as opposed to long cascading 'if else'. > > If David had come up with a valid point or a good suggestion, then >> I would possibly tolerate a minimum of behavioural problems from him. >> > > How is 'can we please use hash tables' not a valid point and suggestion? 
> > But suggesting we abandon 50 years of progress towards structured >> programming, and use GOTOs to solve a nonexistant problem, for which >> there are perfectly good and sensible methods, should it materialize, >> just because he saw a neat trick in an old book and wanted to show >> of his skillz, earns him no right to flame people in this project. >> > Perfectly good and sensible methods such as what? 500 cascading 'if else' > for each > call? Are you seriously suggesting that is a technique honed to perfection > in the last > 50 years that is based on structured programming? > > I read about jump tables and hashing many many years ago. It is hardly a > neat trick > I recently dug out of an old book. Let me ask you this: have you heard of > Bob Jenkins? > Would you say his analysis of hash tables is outdated and meaningless? > > In regard to showing off skills; I could really care less what you or > anyone else think > of my coding skills. I responded to the initial question because I wanted > to honestly > point people towards a better solution to a recurring problem that has been > mentioned > in the list. > > Your last statement implies people can 'earn' the right to flame. ? Is that > what you are > doing? Using your 'earned' right to flame me? > > And that's the end of that. >> > > Having the last word is something given to the victor. Arbitrarily > declaring your statements > to be the last word is pretty arrogant. > >> Poul-Henning >> >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Dodd at comicrelief.com Tue Mar 8 23:06:41 2011 From: B.Dodd at comicrelief.com (Ben Dodd) Date: Tue, 8 Mar 2011 22:06:41 +0000 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: Message-ID: Hello, This is only to add we've been experiencing exactly the same issue and are desperately searching for a solution. Can anyone help? Thanks, Ben On 8 Mar 2011, at 21:55, > > wrote: -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: March-08-11 3:38 PM To: varnish-misc at varnish-cache.org Subject: Varnish 503ing on ~1/100 POSTs I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. The current load is light (up to a couple of thousand active sessions, peak is around five thousand). Varnish has a fairly simple config with a director consisting of two Apache backends: backend backend1 { .host = "1.2.3.4"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 1.2.3.4" "Connection: close"; } } backend backend2 { .host = "5.6.7.8"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 5.6.7.8" "Connection: close"; } } The numbers are modest, but significant - about 1 POST in a hundred fails. I've upped the backend timeouts to 90 seconds (first_byte / between_bytes) and I'm pretty confident they're responding in well under that time. varnishlog does not show any backend health changes. 
A typical event looks like: Varnish: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST http://www.sitename.net/newreply.php?do=postreply&t=285227 HTTP/1.1" 503 2623 Backend: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST /newreply.php?do=postreply&t=285227 HTTP/1.1" 200 2686 The POST appears to work fine on the backend but the user gets a 503 from Varnish. It's not unusual to see users getting the error several times in a row (presumably re-submitting the post): a.b.c.d - - [08/Mar/2011:18:21:23 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:36 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:50 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 A typical request is below. The first attempt fails with: 33 FetchError c http first read error: -1 0 (Success) there is presumably a restart and the second attempt (sometimes to backend1, sometimes backend2) fails with: 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This pattern has been the same on the few transactions I've examined in detail. The full log output of a typical request is below. I'm stumped. Has anybody got any ideas what might be causing this? -Ronan 33 RxRequest c POST 33 RxURL c /ajax.php 33 RxProtocol c HTTP/1.1 33 RxHeader c Accept: */* 33 RxHeader c Accept-Language: nl-be 33 RxHeader c Referer: http://www.redcafe.net/ 33 RxHeader c x-requested-with: XMLHttpRequest 33 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 33 RxHeader c Accept-Encoding: gzip, deflate 33 RxHeader c User-Agent: Mozilla/4.0 (compatible; ...) 33 RxHeader c Host: www.sitename.net 33 RxHeader c Content-Length: 82 33 RxHeader c Connection: Keep-Alive 33 RxHeader c Cache-Control: no-cache 33 RxHeader c Cookie: ... 33 VCL_call c recv 33 VCL_return c pass 33 VCL_call c hash 33 VCL_return c hash 33 VCL_call c pass 33 VCL_return c pass 33 Backend c 44 backend backend1 44 TxRequest b POST 44 TxURL b /ajax.php 44 TxProtocol b HTTP/1.1 44 TxHeader b Accept: */* 44 TxHeader b Accept-Language: nl-be 44 TxHeader b Referer: http://www.sitename.net/ 44 TxHeader b x-requested-with: XMLHttpRequest 44 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 44 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 44 TxHeader b Host: www.sitename.net 44 TxHeader b Content-Length: 82 44 TxHeader b Cache-Control: no-cache 44 TxHeader b Cookie: ... 44 TxHeader b Accept-Encoding: gzip 44 TxHeader b X-Forwarded-For: a.b.c.d 44 TxHeader b X-Varnish: 657185708 * 33 FetchError c http first read error: -1 0 (Success) 44 BackendClose b backend1 33 Backend c 47 backend backend2 47 TxRequest b POST 47 TxURL b /ajax.php 47 TxProtocol b HTTP/1.1 47 TxHeader b Accept: */* 47 TxHeader b Accept-Language: nl-be 47 TxHeader b Referer: http://www.sitename.net/ 47 TxHeader b x-requested-with: XMLHttpRequest 47 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 47 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 47 TxHeader b Host: www.sitename.net 47 TxHeader b Content-Length: 82 47 TxHeader b Cache-Control: no-cache 47 TxHeader b Cookie: ... 
47 TxHeader b Accept-Encoding: gzip 47 TxHeader b X-Forwarded-For: a.b.c.d 47 TxHeader b X-Varnish: 657185708 * 33 FetchError c backend write error: 11 (Resource temporarily unavailable) 47 BackendClose b backend2 33 VCL_call c error 33 VCL_return c deliver 33 VCL_call c deliver 33 VCL_return c deliver 33 TxProtocol c HTTP/1.1 33 TxStatus c 503 33 TxResponse c Service Unavailable 33 TxHeader c Server: Varnish 33 TxHeader c Retry-After: 0 33 TxHeader c Content-Type: text/html; charset=utf-8 33 TxHeader c Content-Length: 2623 33 TxHeader c Date: Tue, 08 Mar 2011 17:08:33 GMT 33 TxHeader c X-Varnish: 657185708 33 TxHeader c Age: 3 33 TxHeader c Via: 1.1 varnish 33 TxHeader c Connection: close 33 Length c 2623 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 33 SessionClose c error 33 StatSess c a.b.c.d 50044 3 1 1 0 1 0 235 2623 ________________________________ Comic Relief 1st Floor 89 Albert Embankment London SE1 7TP Tel: 020 7820 2000 Fax: 020 7820 2222 red at comicrelief.com www.comicrelief.com Comic Relief is the operating name of Charity Projects, a company limited by guarantee and registered in England no. 1806414; registered charity 326568 (England & Wales) and SC039730 (Scotland). Comic Relief Ltd is a subsidiary of Charity Projects and registered in England no. 1967154. Registered offices: Hanover House, 14 Hanover Square, London W1S 1HP. VAT no. 773865187. This email (and any attachment) may contain confidential and/or privileged information. If you are not the intended addressee, you must not use, disclose, copy or rely on anything in this email and should contact the sender and delete it immediately. The views of the author are not necessarily those of Comic Relief. We cannot guarantee that this email (and any attachment) is virus free or has not been intercepted and amended, so do not accept liability for any damage resulting from software viruses. You should carry out your own virus checks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stewsnooze at gmail.com Tue Mar 8 23:11:39 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Tue, 8 Mar 2011 16:11:39 -0600 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: Message-ID: <8371073562863633333@unknownmsgid> On 8 Mar 2011, at 16:07, Ben Dodd wrote: Hello, This is only to add we've been experiencing exactly the same issue and are desperately searching for a solution. Can anyone help? Thanks, Ben On 8 Mar 2011, at 21:55, < varnish-misc-request at varnish-cache.org> wrote: -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: March-08-11 3:38 PM To: varnish-misc at varnish-cache.org Subject: Varnish 503ing on ~1/100 POSTs I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. The current load is light (up to a couple of thousand active sessions, peak is around five thousand). 
Whilst this may not be a fix to a possible bug in varnish have you tried switching posts to pipe instead of pass?

Stew

From ronan at iol.ie Tue Mar 8 23:32:29 2011
From: ronan at iol.ie (Ronan Mullally)
Date: Tue, 8 Mar 2011 22:32:29 +0000 (GMT)
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To: <8371073562863633333@unknownmsgid>
References: <8371073562863633333@unknownmsgid>
Message-ID: On Tue, 8 Mar 2011, Stewart Robinson wrote:

> Whilst this may not be a fix to a possible bug in varnish have you tried
> switching posts to pipe instead of pass?
This might well help, but I'd have no way of knowing for sure. The backend servers indicate the requests via varnish are processed correctly. I'm not able to reproduce the problem at will so I'd be relying on user feedback to determine if the problem still occurs and that's unreliable at best. It is of course better than having the problem occur, but I'd rather take the opportunity to try and get to the bottom of it while I can. I only deployed varnish a couple of days ago. The site will be fairly quiet until the end of the week. I'll resort to pipe if I've not got a fix by then. -Ronan From phk at phk.freebsd.dk Tue Mar 8 23:36:18 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 22:36:18 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 14:50:57 EST." <4D7688A1.8020202@sbgnet.com> Message-ID: <7141.1299623778@critter.freebsd.dk> In message <4D7688A1.8020202 at sbgnet.com>, David Helkowski writes: >On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: >> In message, Jona >My anger was due primarily to the statement "if you got to deploy a >whole bunch of scary inline C that will seriously intimidate the >summer intern and makes all the other fear the config it's just not >worth it." > >The contents of that private email essentially boiled to me saying, in >many more words: >"not everyone is as stupid as you". Well, to put it plainly and simply: In the context of the Varnish project, viewed through the prism that is our project philosphy, Per is Right and you are Wrong. Inline C is the last resort, it is there because there needs to be a last resort, but optimizations like the one you propose does not belong *anywhere* inline C or not. End of story. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ronan at iol.ie Tue Mar 8 23:42:04 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 8 Mar 2011 22:42:04 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> Message-ID: Hi Stefan, On Tue, 8 Mar 2011, Caunter, Stefan wrote: > I would look at setting a fail director. Restart if there is a 503, and > if restarts > 0 select the patient director with very generous health > checking. Your timeouts are reasonable, but try .timeout 20s and > .threshold 1 for the patient director. Having a different view of the > backends usually deals with occasional 503s. Thanks for your email. Unfortunately as a varnish newbie most of it went right over my head. Are you suggesting I make changes to the health check probes to see if they will up/down backends more aggressively? I would be surprised if there are underlying health issues with the back end. The site has been running fine under everything but the heaviest of loads using pound as the front end for the past couple of years, and the backend log entries I've looked at suggest that apache is processing the POSTs fine, it's varnish that's returning the error. 
-Ronan

From dhelkowski at sbgnet.com Wed Mar 9 00:09:59 2011
From: dhelkowski at sbgnet.com (David Helkowski)
Date: Tue, 8 Mar 2011 18:09:59 -0500 (EST)
Subject: Lots of configs
In-Reply-To: <7141.1299623778@critter.freebsd.dk>
Message-ID: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com>

Please refrain from continuing to message the list on this topic. I will not do so either, provided you stop sending things like 'David is wrong, and his ideas should never be considered' to the list. It is entirely childish, and I am sure people are sick of seeing this sort of garbage in the list.

My only response to this latest attack is that Varnish is open source software. I can and will publish a how-to on using hashing in the manner that I have described. There is nothing that you can do to stop it, and I am sure people will take advantage of it.
----- Original Message ----- From: "Poul-Henning Kamp" To: "David Helkowski" Cc: "Jonathan DeMello" , varnish-misc at varnish-cache.org, "Per Buer" Sent: Tuesday, March 8, 2011 5:36:18 PM Subject: Re: Lots of configs In message <4D7688A1.8020202 at sbgnet.com>, David Helkowski writes: >On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: >> In message, Jona >My anger was due primarily to the statement "if you got to deploy a >whole bunch of scary inline C that will seriously intimidate the >summer intern and makes all the other fear the config it's just not >worth it." > >The contents of that private email essentially boiled to me saying, in >many more words: >"not everyone is as stupid as you". Well, to put it plainly and simply: In the context of the Varnish project, viewed through the prism that is our project philosphy, Per is Right and you are Wrong. Inline C is the last resort, it is there because there needs to be a last resort, but optimizations like the one you propose does not belong *anywhere* inline C or not. End of story. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Wed Mar 9 00:45:57 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 23:45:57 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 18:09:59 EST." <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> Message-ID: <7556.1299627957@critter.freebsd.dk> In message <635787275.835680.1299625799368.JavaMail.root at mail-01.sbgnet.com>, D avid Helkowski writes: >Please refrain from continuing to message the list on this topic. I prefer the archives show the full exchange, should any of your future potential employers google your name. If you do not like that, then you should think carefully about what you post in public. >My only response to this latest attack is that Varnish is open >source software. I can and will publish a how-to on using hashing >in the manner that I have described. There is nothing that you can >do to stop it, and I am sure people will take advantage of it. Have fun :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From jhalfmoon at milksnot.com Wed Mar 9 00:50:45 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Wed, 09 Mar 2011 00:50:45 +0100 Subject: Lots of configs In-Reply-To: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> References: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> Message-ID: <4D76C0D5.9030809@milksnot.com> On 03/09/2011 12:09 AM, David Helkowski wrote: > Please refrain from continuing to message the list on this topic. > I will not do so either, provided you stop sending things like Are you proposing some kind of 'hushing' algorithm? > 'David is wrong, and his ideas should never be considered' to the list. > It is entirely childish, and I am sure people are sick of seeing this > sort of garbage in the list. > > My only response to this latest attack is that Varnish is open > source software. I can and will publish a how-to on using hashing > in the manner that I have described. There is nothing that you can > do to stop it, and I am sure people will take advantage of it. 
From geoff at uplex.de Wed Mar 9 00:58:18 2011
From: geoff at uplex.de (Geoff Simmons)
Date: Wed, 09 Mar 2011 00:58:18 +0100
Subject: Lots of configs
In-Reply-To: <4D76C0D5.9030809@milksnot.com>
References: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> <4D76C0D5.9030809@milksnot.com>
Message-ID: <4D76C29A.1030502@uplex.de>

On 3/9/11 12:50 AM, Johnny Halfmoon wrote:
>
> Are you proposing some kind of 'hushing' algorithm?

Some threads fail silently, whereas others fail verbosely.

--
UPLEX Systemoptimierung
Schwanenwik 24
22087 Hamburg
http://uplex.de/
Mob: +49-176-63690917

From drew.smathers at gmail.com Wed Mar 9 01:23:19 2011
From: drew.smathers at gmail.com (Drew Smathers)
Date: Tue, 8 Mar 2011 19:23:19 -0500
Subject: Varnish still 503ing after adding grace to VCL
In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 3:51 PM, Per Buer wrote:
> Hi Drew, list.
> On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers wrote:
>>
>> Sorry to bump my own thread, but does anyone know of a way to set
>> saintmode if a backend is down, vs. up and misbehaving (returning 500,
>> etc)?
>>
>> Also, I added a backend probe and this indeed caused grace to kick in
>> once the probe determined the backend as sick. I think the docs should
>> be clarified if this isn't a bug (grace not working without probe):
>>
>> http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers
>
> Check out the trunk version of the docs. Committed some earlier today.
>
Thanks, I see a lot is getting
>>
>> Finally it's somewhat disconcerting that in the interim between a
>> cache expiry and before varnish determines a backend as down (sick) it
>> will 503 - so this could affect many clients during that window.
>> Ideally, I'd like to successfully service requests if there's an
>> object in the cache - period - but I guess this isn't possible now
>> with varnish?
>
> Actually it is. In the docs there is a somewhat dirty trick where you set a
> marker in vcl_error, restart and pick up on the error and switch backend to
> one that is permanently down. Grace kicks in and serves the stale content.
> Sometime post 3.0 there will be a refactoring of the whole vcl_error
> handling and we'll end up with something a bit more elegant.
>
Well a dirty trick is good enough if it makes a paying customer for me. :P This is working perfectly now. I would suggest giving an example of the "magic marker" mentioned in the document that describes the trick (http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html).
Here's a stripped down version of my VCL incorporating the trick: backend webapp { .host = "127.0.0.1"; .port = "8000"; .probe = { .url = "/hello/"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; } } /* A backend that will always fail. */ backend failapp { .host = "127.0.0.1"; .port = "9000"; .probe = { .url = "/hello/"; .interval = 12h; .timeout = 1s; .window = 1; .threshold = 1; } } sub vcl_recv { if (req.http.X-Varnish-Error == "1") { set req.backend = failapp; unset req.http.X-Varnish-Error; } else { set req.backend = webapp; } if (! req.backend.healthy) { set req.grace = 24h; } else { set req.grace = 1m; } } sub vcl_error { if ( req.http.X-Varnish-Error != "1" ) { set req.http.X-Varnish-Error = "1"; return (restart); } } sub vcl_fetch { set beresp.grace = 24h; } From dhelkowski at sbgnet.com Wed Mar 9 01:48:30 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 8 Mar 2011 19:48:30 -0500 (EST) Subject: Lots of configs In-Reply-To: <7556.1299627957@critter.freebsd.dk> Message-ID: <978396342.836068.1299631710854.JavaMail.root@mail-01.sbgnet.com> ----- Original Message ----- From: "Poul-Henning Kamp" To: "David Helkowski" Cc: "Jonathan DeMello" , varnish-misc at varnish-cache.org, "Per Buer" Sent: Tuesday, March 8, 2011 6:45:57 PM Subject: Re: Lots of configs In message <635787275.835680.1299625799368.JavaMail.root at mail-01.sbgnet.com>, D avid Helkowski writes: >>Please refrain from continuing to message the list on this topic. >I prefer the archives show the full exchange, should any of your >future potential employers google your name. Once again; this is pretty rude. My point is not to waste people's energy reading this, not to attempt to hide anything. At a previous point in my past; I had my entire diary posted on the internet; over 1 million words. You won't find that I am the sort of person's who attempt to hide anything. >If you do not like that, then you should think carefully about >what you post in public. I agree with that, but I think that you are responsible for how you treat or abuse others in public. If you are in a position of authority and knowledge you should treat those beneath you well; not mock them. >>My only response to this latest attack is that Varnish is open >>source software. I can and will publish a how-to on using hashing >>in the manner that I have described. There is nothing that you can >>do to stop it, and I am sure people will take advantage of it. Have fun :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From dhelkowski at sbgnet.com Wed Mar 9 01:51:01 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 8 Mar 2011 19:51:01 -0500 (EST) Subject: Lots of configs In-Reply-To: <4D76C0D5.9030809@milksnot.com> Message-ID: <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> ---- Original Message ----- From: "Johnny Halfmoon" To: "David Helkowski" Cc: "Poul-Henning Kamp" , varnish-misc at varnish-cache.org, "Jonathan DeMello" Sent: Tuesday, March 8, 2011 6:50:45 PM Subject: Re: Lots of configs On 03/09/2011 12:09 AM, David Helkowski wrote: >> Please refrain from continuing to message the list on this topic. >> I will not do so either, provided you stop sending things like >Are you proposing some kind of 'hushing' algorithm? I posted this request at the suggestion of a 3rd party; because I did not wish to waste people's time. 
Seeing as PHK is essentially the authority and controller of the list; I am going to continue responding as appropriate unless I am directed not to by PHK. >> 'David is wrong, and his ideas should never be considered' to the list. >> It is entirely childish, and I am sure people are sick of seeing this >> sort of garbage in the list. >> >> My only response to this latest attack is that Varnish is open >> source software. I can and will publish a how-to on using hashing >> in the manner that I have described. There is nothing that you can >> do to stop it, and I am sure people will take advantage of it. > From straightflush at gmail.com Wed Mar 9 03:06:30 2011 From: straightflush at gmail.com (AD) Date: Tue, 8 Mar 2011 21:06:30 -0500 Subject: Lots of configs In-Reply-To: <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> References: <4D76C0D5.9030809@milksnot.com> <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> Message-ID: As the OP, i would like to get the discussion on this thread back to something useful. That being said... Assuming there was an O(1) (or some ideal) mechanism to lookup req.host and map it to a custom function, i notice that i get the error "Unused function custom_host" if there is not an explicit call in the VCL. Aside from having a dummy subroutine that listed all the "calls", is there a cleaner way to deal with this? I am also going to take a stab at making this a module, i already did this with an md5 function so I think that will solve the "pre-loading" problem. Adam On Tue, Mar 8, 2011 at 7:51 PM, David Helkowski wrote: > ---- Original Message ----- > From: "Johnny Halfmoon" > To: "David Helkowski" > Cc: "Poul-Henning Kamp" , > varnish-misc at varnish-cache.org, "Jonathan DeMello" < > demello.itp at googlemail.com> > Sent: Tuesday, March 8, 2011 6:50:45 PM > Subject: Re: Lots of configs > > > On 03/09/2011 12:09 AM, David Helkowski wrote: > >> Please refrain from continuing to message the list on this topic. > >> I will not do so either, provided you stop sending things like > > > >Are you proposing some kind of 'hushing' algorithm? > > I posted this request at the suggestion of a 3rd party; because I did > not wish to waste people's time. Seeing as PHK is essentially the authority > and controller of the list; I am going to continue responding as > appropriate > unless I am directed not to by PHK. > > >> 'David is wrong, and his ideas should never be considered' to the list. > >> It is entirely childish, and I am sure people are sick of seeing this > >> sort of garbage in the list. > >> > >> My only response to this latest attack is that Varnish is open > >> source software. I can and will publish a how-to on using hashing > >> in the manner that I have described. There is nothing that you can > >> do to stop it, and I am sure people will take advantage of it. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Wed Mar 9 09:18:52 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 09 Mar 2011 08:18:52 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 21:06:30 EST." Message-ID: <9583.1299658732@critter.freebsd.dk> In message , AD w rites: >As the OP, i would like to get the discussion on this thread back to >something useful. That being said... 
Arthur and I brainstormed this issue on our way to cake after VUG3 and a couple of ideas came up which may be worth looking at. At the top-heavy end, is having VCL files tell which domains they apply to, possibly something like: domains { "*.example.com"; ! "notthisone.example.com"; "*.example.biz"; } There are a large number of "what happens if I then do..." questions that needs answered sensibly to make that work, but I think it is doable and worthwhile. The next one we talked about is letting backend declarations declare which domains they apply to, pretty much same syntax as above, now just inside a backend. This would modify the current default backend selection and nothing more. There needs to be some kind of "matched no backend" handling. And finally, since most users with massive domains will need or want to reload VCL for trivial addition/removals of domains, somebody[TM] should probably write a VMOD which looks a domain up in a suitable database file (db/dbm/whatever) There are many ways we can mould and modify these ideas, and I invite you to hash out which way you would prefer it work... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From david at firechaser.com Wed Mar 9 09:41:14 2011 From: david at firechaser.com (David Murphy) Date: Wed, 9 Mar 2011 09:41:14 +0100 Subject: Lots of configs In-Reply-To: <9583.1299658732@critter.freebsd.dk> References: <9583.1299658732@critter.freebsd.dk> Message-ID: >And finally, since most users with massive domains will need or want >to reload VCL for trivial addition/removals of domains, somebody[TM] >should probably write a VMOD which looks a domain up in a suitable >database file (db/dbm/whatever) I was wondering, is there any way for us to be able to run an external lookup that can form part of decision-making in VCL. For example, a file or db lookup to see if a value is true/false and that will determine which sections of VCL code run? A real-world example is where we have a waiting room feature that aims to limit traffic reaching a payment portal. When the waiting room is on we'd like Varnish to hold onto the traffic. When turned off we would then forward the requests hitting VCL to the payment system. Currently we are doing this in the backend with a PHP / MySQL lookup and it works, but it's far from ideal. Perhaps a better way would be to pass in the true/false value as a command line arg to Varnish as a 'reload' rather than restart (similar to Apache, I guess) so we don't lose connections. Would also mean that no lookups are required per request. The waiting room state changes on/off only a few times a day. Not sure if this is possible or even desirable but would appreciate your thoughts/suggestions. Thanks, David From phk at phk.freebsd.dk Wed Mar 9 10:01:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 09 Mar 2011 09:01:08 +0000 Subject: Lots of configs In-Reply-To: Your message of "Wed, 09 Mar 2011 09:41:14 +0100." 
Message-ID: <9904.1299661268@critter.freebsd.dk> In message , Davi d Murphy writes: >>And finally, since most users with massive domains will need or want >>to reload VCL for trivial addition/removals of domains, somebody[TM] >>should probably write a VMOD which looks a domain up in a suitable >>database file (db/dbm/whatever) > >I was wondering, is there any way for us to be able to run an external >lookup that can form part of decision-making in VCL. For example, a >file or db lookup to see if a value is true/false and that will >determine which sections of VCL code run? Writing a VMOD that does that shouldn't be hard, we just need to find somebody with a pocket full of round tuits. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From paul.lu81 at gmail.com Tue Mar 8 18:50:44 2011 From: paul.lu81 at gmail.com (Paul Lu) Date: Tue, 8 Mar 2011 09:50:44 -0800 Subject: A lot of if statements to handle hostnames In-Reply-To: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> References: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Message-ID: Primarily just to make the code cleaner and a little concerned if I have a lot of hostnames. 100 for example. Having to potentially traverse several if statements for each request seems inefficient to me. Thank you, Paul On Mon, Mar 7, 2011 at 11:32 PM, Indranil Chakravorty < indranilc at rediff-inc.com> wrote: > Apart from improving the construct to if ... elseif , could you please tell > me the reason why you are looking for a different way? Is it only for ease > of writing less statements or is there some other reason you foresee? I am > asking because we also have a number of similar construct in our vcl. > Thanks. > > Thanks, > Neel > > On Tue, 08 Mar 2011 12:31:11 +0530 Paul Lu wrote > > >Hi, > > > >I have to work with a lot of domain names in my varnish config and I was > wondering if there is an easier to way to match the hostname other than a > series of if statements. Is there anything like a hash? Or does anybody have > any C code to do this? > > > >example pseudo code: > >================================= > >vcl_recv(){ > > > > if(req.http.host == "www.domain1.com") > > { > > set req.backend = www_domain1_com; > > # more code > > return(lookup); > > } > > if(req.http.host == "www.domain2.com") > > { > > set req.backend = www_domain2_com; > > # more code > > return(lookup); > > } > > if(req.http.host == "www.domain3.com") > > { > > set req.backend = www_domain3_com; > > # more code > > return(lookup); > > } > >} > >================================= > > > >Thank you, > >Paul > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... 
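For comparison, the if ... elseif chaining suggested in the quoted reply above, written out with the same made-up hostnames and backend names as the pseudo code (the backends themselves are assumed to be declared elsewhere in the VCL), looks roughly like this:

sub vcl_recv {
    if (req.http.host == "www.domain1.com") {
        set req.backend = www_domain1_com;
    } elsif (req.http.host == "www.domain2.com") {
        set req.backend = www_domain2_com;
    } elsif (req.http.host == "www.domain3.com") {
        set req.backend = www_domain3_com;
    } else {
        set req.backend = default_backend;   # whatever fallback backend is declared
    }
    return (lookup);
}

Functionally this is still a linear chain of string comparisons, which is what the reply that follows says is nothing to worry about.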
URL: From perbu at varnish-software.com Wed Mar 9 10:13:18 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 9 Mar 2011 10:13:18 +0100 Subject: A lot of if statements to handle hostnames In-Reply-To: References: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Message-ID: On Tue, Mar 8, 2011 at 6:50 PM, Paul Lu wrote: > Primarily just to make the code cleaner and a little concerned if I have a > lot of hostnames. 100 for example. Having to potentially traverse several > if statements for each request seems inefficient to me. > Don't worry about it. I think we've clearly established that it isn't (in a parallel thread). -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtshilston at gmail.com Wed Mar 9 13:05:31 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Wed, 9 Mar 2011 12:05:31 +0000 Subject: Lots of configs In-Reply-To: <9583.1299658732@critter.freebsd.dk> References: <9583.1299658732@critter.freebsd.dk> Message-ID: <69A34F57-3B91-41B0-8DD8-49191E80E268@gmail.com> On 9 Mar 2011, at 08:18, Poul-Henning Kamp wrote: > In message , AD w > rites: > >> As the OP, i would like to get the discussion on this thread back to >> something useful. That being said... > > Arthur and I brainstormed this issue on our way to cake after VUG3 > and a couple of ideas came up which may be worth looking at. > > At the top-heavy end, is having VCL files tell which domains they > apply to, possibly something like: > > domains { > "*.example.com"; > ! "notthisone.example.com"; > "*.example.biz"; > } I was chatting to a Varnish administrator at a PHP conference in London a couple of weeks ago. They run Varnish for a very high profile site which has lots of sub-sites that have delegated web teams. So, for example, all traffic to www.example.com hits Varnish, and www.example.com/alpha is managed by a completely separate team to www.example.com/beta. Thanks to Varnish, each base path can be routed to different backends. However, the varnish behaviour itself is different for different paths. My understanding is that each team submits their VCL to the central administrator who sticks it together, and that each path/site has a separate set of vcl_* functions. Whilst I obviously don't know exactly how they're doing this, I think that this different level of behaviour splitting would be worth considering as part of these discussions. 
So, perhaps it might make sense to have individual VCL files that declare what they're interested in, such as: ==alpha.vcl== appliesto { "alpha"; "alpha.example.com"; "alpha.example.co.uk"; } sub vcl_recv { set req.backend = alphapool; } == and then in the main VCL, do something pseudo-code like: ==default.vcl== include "alpha.vcl" sub vcl_recv { if (req.http.host == "www.example.com") { /* Do some regex to find the first part of the path, and see if there's a valid config for it */ callconfig(reg_sub(req.url, "/(.*)(/.*)?", "\1")); } else { /* Try to see if there's a match for this hostname */ callconfig(req.http.host); } /* By this point, nothing has matched, so call some default behaviour */ callconfig("default"); } == So, callconfig effectively works a bit like the current 'return' statement, but only if a config that 'appliesto' the defined string is found in a config - once the config is called, no further code in the calling function is executed. By detaching this behaviour from the concept of a "domain" in PHK's example, then this pattern could be used for a wider range of scenarios - perhaps switching based on the requestor's IP / ACL matches or whatever else Varnish users might need. Rob From scaunter at topscms.com Wed Mar 9 16:11:05 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 9 Mar 2011 10:11:05 -0500 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <8371073562863633333@unknownmsgid> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net> I don't think pass or pipe is the issue. 503 means the backend can't answer, and calling pipe won't change that. Here's an example. Set up a "patient" back end; you can collect your back ends into a patient director. backend waitalongtime { .host = "a.b.c.d"; .port = "80"; .first_byte_timeout = 60s; .between_bytes_timeout = 10s; .probe = { .url = "/areyouthere/"; .timeout = 10s; .interval = 15s; .window = 5; .threshold = 1; } } Check the number of restarts before you select a back end. Try your normal, fast director first. if (req.restarts == 0) { set req.backend = fast; } else if (req.restarts == 1) { set req.backend = waitalongtime; } else if (req.restarts == 2) { set req.backend = waitalongtime; } else { set req.backend = waitalongtime; } If you get a 503, catch it in error, and increment restart. This will select the slow back end. sub vcl_error { if (obj.status == 503 && req.restarts < 4) { restart; } } Stefan Caunter Operations Torstar Digital m: (416) 561-4871 -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: March-08-11 5:32 PM To: Stewart Robinson Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish 503ing on ~1/100 POSTs On Tue, 8 Mar 2011, Stewart Robinson wrote: > Whilst this may not be a fix to a possible bug in varnish have you tried > switching posts to pipe instead of pass? This might well help, but I'd have no way of knowing for sure. The backend servers indicate the requests via varnish are processed correctly. I'm not able to reproduce the problem at will so I'd be relying on user feedback to determine if the problem still occurs and that's unreliable at best. It is of course better than having the problem occur, but I'd rather take the opportunity to try and get to the bottom of it while I can. I only deployed varnish a couple of days ago. The site will be fairly quiet until the end of the week. I'll resort to pipe if I've not got a fix by then. 
-Ronan

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From ronan at iol.ie Wed Mar 9 16:38:28 2011
From: ronan at iol.ie (Ronan Mullally)
Date: Wed, 9 Mar 2011 15:38:28 +0000 (GMT)
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net>
References: <8371073562863633333@unknownmsgid> <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net>
Message-ID: Hi Stefan,

On Wed, 9 Mar 2011, Caunter, Stefan wrote:

> I don't think pass or pipe is the issue. 503 means the backend can't
> answer, and calling pipe won't change that.
> Set up a "patient" back end; you can collect your back ends into a
> patient director.

Ah, the penny drops. I was thinking of "patient" in the context of health checks (i.e. a sick backend). I'll give it a go, but my gut feeling is that the backends aren't at fault. I'm seeing this error when both backends are lightly loaded (load average around 1 on an 8-core box), and the rate of incidence does not appear to be related to the load - I actually saw a slightly lower rate (under 1%) last night when utilisation was higher, and as I said previously when I used pound instead of varnish this wasn't a problem.

I'll try the patient backend and keep a close eye on the error rate vs utilisation over the next few days. Thanks for your help.

-Ronan

From nadahalli at gmail.com Wed Mar 9 23:46:58 2011
From: nadahalli at gmail.com (Tejaswi Nadahalli)
Date: Wed, 9 Mar 2011 17:46:58 -0500
Subject: Duplicate Purges / Purge.Length size reduction
Message-ID: Hello All. I have a few questions on the length of the purge.list.

1 - Is it something to be worried about? What's the optimal n_struct_object to n_active_purges ratio?
2 - If I have periodic purge adds that are adding the same URL pattern to be purged, does varnish do any internal optimization?
3 - Is it better to have a ban lurker in place to keep the purge.list length under check?

-T
-------------- next part --------------
An HTML attachment was scrubbed...
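For context, the "periodic purge adds" described above are typically issued either from the CLI or from VCL; a minimal VCL sketch in the 2.1-era string-based purge style (the ACL, the PURGE request-method convention and the expression are illustrative only, not from the original post):

acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        # each call below appends one entry to the list shown by purge.list
        purge("req.http.host == " req.http.host " && req.url ~ " req.url);
        error 200 "Purged.";
    }
}

How aggressively duplicate or obsolete entries are cleaned out of that list depends on the version and on whether the ban lurker is running, so watching purge.list on the live instance is the most direct way to see what actually accumulates.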
URL: From tfheen at varnish-software.com Thu Mar 10 08:08:07 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Thu, 10 Mar 2011 08:08:07 +0100 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: (Ronan Mullally's message of "Tue, 8 Mar 2011 20:38:08 +0000 (GMT)") References: Message-ID: <871v2fwizs.fsf@qurzaw.varnish-software.com> ]] Ronan Mullally | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. | The current load is light (up to a couple of thousand active sessions, | peak is around five thousand). Varnish has a fairly simple config with | a director consisting of two Apache backends: This looks a bit odd: | backend backend1 { | .host = "1.2.3.4"; | .port = "80"; | .connect_timeout = 5s; | .first_byte_timeout = 90s; | .between_bytes_timeout = 90s; | A typical request is below. The first attempt fails with: | | 33 FetchError c http first read error: -1 0 (Success) This just means the backend closed the connection on us. | there is presumably a restart and the second attempt (sometimes to | backend1, sometimes backend2) fails with: | | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This is a timeout, however: | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 That 2.89s backend response time doesn't add up with your timeouts. Can you see if you can get a tcpdump of what's going on? Regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From ronan at iol.ie Thu Mar 10 14:29:23 2011 From: ronan at iol.ie (Ronan Mullally) Date: Thu, 10 Mar 2011 13:29:23 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <871v2fwizs.fsf@qurzaw.varnish-software.com> References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Hej Tollef, On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > | 33 FetchError c http first read error: -1 0 (Success) > > This just means the backend closed the connection on us. > > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > This is a timeout, however: > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > That 2.89s backend response time doesn't add up with your timeouts. Can > you see if you can get a tcpdump of what's going on? I'll see what I can do. Varnish is serving an average of about 20 objects per second so there'll be a lot of data to gather / sift through. The following numbers might prove useful - they're counts of the number of successful GETs, POSTs and 503s since 17:00 yesterday. 
GET POST Hour 200 503 200 503 ------------------------------------------ 17:00 72885 0 (0.00%) 841 0 (0.00%) 18:00 69266 0 (0.00%) 858 6 (0.70%) 19:00 65030 0 (0.00%) 866 3 (0.35%) 20:00 70289 0 (0.00%) 975 8 (0.82%) 21:00 105767 0 (0.00%) 1214 5 (0.41%) 22:00 86236 0 (0.00%) 834 3 (0.36%) 23:00 67078 0 (0.00%) 893 2 (0.22%) 00:00 48042 0 (0.00%) 669 4 (0.60%) 01:00 35966 0 (0.00%) 479 0 (0.00%) 02:00 29598 0 (0.00%) 395 3 (0.76%) 03:00 25819 0 (0.00%) 359 0 (0.00%) 04:00 22835 0 (0.00%) 366 4 (1.09%) 05:00 24487 0 (0.00%) 315 1 (0.32%) 06:00 26583 0 (0.00%) 353 4 (1.13%) 07:00 30433 0 (0.00%) 398 2 (0.50%) 08:00 37394 0 (0.00%) 363 9 (2.48%) 09:00 44462 1 (0.00%) 526 4 (0.76%) 10:00 49891 2 (0.00%) 611 4 (0.65%) 11:00 54826 1 (0.00%) 599 7 (1.17%) 12:00 60765 6 (0.01%) 615 1 (0.16%) 13:00 18941 0 (0.00%) 190 0 (0.00%) Apart from a handful of 503s to GET requests this morning (which I've not had a chance to investigate) the problem almost exclusively affects POSTs. The frequency of the problem does not appear to be related to the load - the highest incidence does not match the busiest periods. I'll get back to you when I have a few packet traces. It will most likely be next week. FWIW, I forgot to mention in my previous posts, I'm running 2.1.5 on a Debian Lenny VM. -Ronan From allan_wind at lifeintegrity.com Thu Mar 10 16:29:18 2011 From: allan_wind at lifeintegrity.com (Allan Wind) Date: Thu, 10 Mar 2011 15:29:18 +0000 Subject: SSL Message-ID: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Is the current thinking still that SSL support will not be integrated into varnish? I found the post in the archives from last year that speaks of nginx as front-end. Has anyone looked into the other stunnel or pound and could share their experience? I cannot tell from their web site if haproxy added SSL support yet. Here is what the pound web site[1] says about stunnel: stunnel: probably comes closest to my understanding of software design (does one job only and does it very well). However, it lacks the load balancing and HTTP filtering features that I considered necessary. Using stunnel in front of Pound (for HTTPS) would have made sense, except that integrating HTTPS into Pound proved to be so simple that it was not worth the trouble. [1] http://www.apsis.ch/pound/ /Allan -- Allan Wind Life Integrity, LLC From phk at phk.freebsd.dk Thu Mar 10 16:41:00 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 10 Mar 2011 15:41:00 +0000 Subject: SSL In-Reply-To: Your message of "Thu, 10 Mar 2011 15:29:18 GMT." <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <82042.1299771660@critter.freebsd.dk> In message <20110310152918.GJ1675 at vent.lifeintegrity.localnet>, Allan Wind writ es: >Is the current thinking still that SSL support will not be >integrated into varnish? Yes, that is current thinking. I can see no advantages that outweigh the disadvantages, and a realistic implementation would not be significantly different from running a separate process to do the job in the first place. http://www.varnish-cache.org/docs/trunk/phk/ssl.html -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
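Whichever terminator ends up in front (pound, stunnel, nginx), the usual arrangement is to have it listen on 443, speak plain HTTP to Varnish, and mark the forwarded requests with a header so VCL can tell HTTPS traffic from HTTP. A minimal sketch of the Varnish side, assuming the terminator has been configured to add an X-Forwarded-Proto header (that header name is a common convention, not something Varnish or pound sets by itself):

sub vcl_recv {
    # requests arriving directly on :80 won't carry the header;
    # normalise it so the cache key below is predictable
    if (!req.http.X-Forwarded-Proto) {
        set req.http.X-Forwarded-Proto = "http";
    }
}

sub vcl_hash {
    set req.hash += req.url;
    if (req.http.host) {
        set req.hash += req.http.host;
    } else {
        set req.hash += server.ip;
    }
    # keep the HTTP and HTTPS variants apart in case the backend renders them differently
    set req.hash += req.http.X-Forwarded-Proto;
    return (hash);
}

Since clients can send the header themselves, it should only be trusted (or should be overwritten) based on where the connection actually came from, e.g. the terminator's address.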
From sime at sime.net.au Fri Mar 11 08:23:07 2011 From: sime at sime.net.au (Simon Males) Date: Fri, 11 Mar 2011 18:23:07 +1100 Subject: SSL In-Reply-To: <20110310152918.GJ1675@vent.lifeintegrity.localnet> References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: On Fri, Mar 11, 2011 at 2:29 AM, Allan Wind wrote: > Is the current thinking still that SSL support will not be > integrated into varnish? ?I found the post in the archives from > last year that speaks of nginx as front-end. ?Has anyone looked > into the other stunnel or pound and could share their experience? > I cannot tell from their web site if haproxy added SSL support > yet. Using pound 2.4.3 (a little dated) over here. Works well. I've found pound will throw errors in /var/log a few seconds after a Chrome connection (Connection timed out). Though this isn't reflected on the client side. Hate to crap on pound's parade, but I've also some client side errors, but they are not reproducible on demand. http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 -- Simon Males From michal.taborsky at nrholding.com Fri Mar 11 09:31:00 2011 From: michal.taborsky at nrholding.com (Michal Taborsky) Date: Fri, 11 Mar 2011 09:31:00 +0100 Subject: SSL In-Reply-To: References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <4D79DDC4.6010606@nrholding.com> Dne 11.3.2011 8:23, Simon Males napsal(a): > I've found pound will throw errors in /var/log a few seconds after a > Chrome connection (Connection timed out). Though this isn't reflected > on the client side. > > Hate to crap on pound's parade, but I've also some client side errors, > but they are not reproducible on demand. > > http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 As far as I know, Chrome uses pre-connect to improve performance. What it does, is it creates immediately more than one TCP/IP connection to the target IP address, because most pages contain images and styles and javascript, and Chrome knows, that it will be downloading these in parallel. So it saves time on handshaking, when the connections are needed later. It will also keep the connections open for quite a long time and maybe pound times out these connections when nothing is happening. This sort of thing can happen with any browser, but I think Chrome is a lot more aggressive than others, so it stands out. -- Michal T?borsk? chief systems architect Netretail Holding, B.V. http://www.nrholding.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.boer at bizztravel.nl Fri Mar 11 08:53:05 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Fri, 11 Mar 2011 08:53:05 +0100 Subject: SSL In-Reply-To: References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <4D79D4E1.4060400@bizztravel.nl> We use Pound as well. It works fine. Regards, Martin On 03/11/2011 08:23 AM, Simon Males wrote: > On Fri, Mar 11, 2011 at 2:29 AM, Allan Wind > wrote: >> Is the current thinking still that SSL support will not be >> integrated into varnish? I found the post in the archives from >> last year that speaks of nginx as front-end. Has anyone looked >> into the other stunnel or pound and could share their experience? >> I cannot tell from their web site if haproxy added SSL support >> yet. > Using pound 2.4.3 (a little dated) over here. Works well. > > I've found pound will throw errors in /var/log a few seconds after a > Chrome connection (Connection timed out). 
Though this isn't reflected > on the client side. > > Hate to crap on pound's parade, but I've also some client side errors, > but they are not reproducible on demand. > > http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 > From lampe at hauke-lampe.de Sun Mar 13 00:31:41 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Sun, 13 Mar 2011 00:31:41 +0100 Subject: caching of restarted requests possible? In-Reply-To: <4D6DEE1A.80609@gadu-gadu.pl> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> <4D6DEE1A.80609@gadu-gadu.pl> Message-ID: <4D7C025D.7060603@hauke-lampe.de> On 02.03.2011 08:13, ?ukasz Barszcz / Gadu-Gadu wrote: > Check out patch attached to ticket > http://varnish-cache.org/trac/ticket/412 which changes behavior to what > you need. Thanks again, it works! I adapted the patch for varnish 2.1.5: http://cfg.openchaos.org/varnish/patches/varnish-2.1.5-cache_restart.diff A working example can be seen here: http://cfg.openchaos.org/varnish/vcl/special/backend_select_updates.vcl Hauke. From checker at d6.com Sun Mar 13 05:28:00 2011 From: checker at d6.com (Chris Hecker) Date: Sat, 12 Mar 2011 20:28:00 -0800 Subject: best way to not cache large files? Message-ID: <4D7C47D0.9050809@d6.com> I have a 400mb file that I just want apache to serve. What's the best way to do this? I can put it in a directory and tell varnish not to cache stuff that matches that dir, but I'd rather just make a general rule that varnish should ignore >=20mb files or whatever. Thanks, Chris From straightflush at gmail.com Sun Mar 13 15:26:54 2011 From: straightflush at gmail.com (AD) Date: Sun, 13 Mar 2011 10:26:54 -0400 Subject: best way to not cache large files? In-Reply-To: <4D7C47D0.9050809@d6.com> References: <4D7C47D0.9050809@d6.com> Message-ID: i dont think you can check the body size (at least it seems that way with the existing req.* objects ). If you know the mime-type of the file you might just be able to pipe the mime type if that works for all file sizes ? I wonder if there is a way to pass the req object into some inline C that can access the body somehow? On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker wrote: > > I have a 400mb file that I just want apache to serve. What's the best way > to do this? I can put it in a directory and tell varnish not to cache stuff > that matches that dir, but I'd rather just make a general rule that varnish > should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kacperw at gmail.com Sun Mar 13 17:30:44 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Sun, 13 Mar 2011 17:30:44 +0100 Subject: VCL BNF Message-ID: Varnish Control Language grammar in BNF notation ================================================ The VCL compiler is a one-step lex-parse-prune-symtable-typecheck-emit compiler. Having looked for it several times myself, and having discussed it with several others the conclusion was that VCL needs a proper grammar. Grammars, as many know, are useful in several circumstances. BNF based on PHK's precedence rules http://www.varnish-cache.org/docs/trunk/phk/vcl_expr.html as well as vcc_Lexer and vcc_Parse from HEAD. 
For those of us unfamiliar with BNF: http://www.cui.unige.ch/db-research/Enseignement/analyseinfo/AboutBNF.html Note on BNF syntax: As the BNF canon is somewhat unweildy, I've opted for the convention of specifying terminal tokens in lowercase, while non-terminals are denoted in UPPERCASE. Optional statements are the usual [..] and repeated statements are {..}. To improve portability there are quotes around literals as this does not sacrifice readability. As for token and production names, I've tried to stay as true to the source code as possible without sacrificing readability. As an extension to BNF I have included comments, which are lines starting with '#'. I have attempted to comment grammar particular to major versions of Varnish and other notables. I have not backward-checked the grammar, and would appreciate comments on what grammar differences we see in V2.0 and 2.1 as compared to 3.0. There are bound to be bugs. Feedback and comments appreciated. v0.1 .. not yet machine parsable(?)! Nonterminals ------------ VCL ::= ACL | SUB | BACKEND | DIRECTOR | PROBE | IMPORT | CSRC ACL ::= 'acl' identifier '{' {ACLENTRY} '}' SUB ::= 'sub' identifier COMPOUND BACKEND ::= 'backend' identifier '{' { ['set|backend'] BACKENDSPEC } '}' PROBE ::= 'probe' identifier '{' PROBESPEC '}' # VMod imports are new in 3.0 IMPORT ::= 'import' identifier [ 'from' string ] ';' CSRC ::= 'C{' inline-c '}C' # director definitions - simple variant DIRECTOR ::= 'director' dirtype identifier '{' DIRSPEC '}' dirtype ::= 'hash' | 'random' | 'client' | 'round-robin' | 'dns' # can do better: specify production rule for every director type DIRECTOR ::= 'director' ('hash'|'random'|'client')' identifier '{' DIRSPEC '}' 'director' 'round-robin' identifier '{' { '.' BACKENDEF } '}' 'director' 'dns' identifier '{' DNSSPEC '}' DIRSPEC ::= [ '.' 'retries' '=' uintval ';' ] { '{' '.' BACKENDEF [ '.' 'weight' '=' numval ';' ] '}' } DNSSPEC ::= { '.' BACKENDEF } [ '.' 'ttl' '=' timeval ';' ] [ '.' 'suffix' '=' string ';' ] [ '.' DNSLIST ] DNSLIST ::= '{' { iprange ';' [ BACKENDSPEC ] } '}' BACKENDEF ::= 'backend' ( BACKENDSPEC | identifier ';' ) # field spec as used in backend and probe definitions SPEC ::= '{' { '.' identifier = fieldval ';' } '}' # can do better: devil is in the detail on this one BACKENDSPEC ::= '.' 'host' '=' string ';' | '.' 'port' '=' string ';' # wow I had no idea... | '.' 'host_header' '=' string ';' | '.' 'connect_timeout''=' timeval ';' | '.' 'first_byte_timeout' '=' timeval ';' | '.' 'between_bytes_timeout' '=' timeval ';' | '.' 'max_connections '=' uintval ';' | '.' 'saintmode_treshold '=' uintval ';' | '.' 'probe' '{' {PROBESPEC} '}' ';' # another woww \0/ | '.' 'probe' identifier; PROBESPEC ::= '.' 'url' = string ';' | '.' 'request' = string ';' | '.' 'expected_response' = uintval ';' | '.' 'timeout' = timeval ';' | '.' 'interval' = timeval ';' | '.' 'window' = uintval ';' | '.' 'treshold' = uintval ';' | '.' 'initial' = uintval ';' # there is no room in BNF for 'either !(..) or (!..) or !..' 
(parens optional) ACLENTRY ::= ['!'] ['('] ['!'] iprange [')'] ';' # totally avoids dangling else yarr IFSTMT ::= 'if' CONDITIONAL COMPOUND [ { ('elsif'|'elseif') CONDITIONAL COMPOUND } [ 'else' COMPOUND ]] CONDITIONAL ::= '(' EXPR ')' COMPOUND ::= '{' {STMT} '}' STMT ::= COMPOUND | IFSTMT | CSRC | ACTIONSTMT ';' ACTIONSTMT ::= ACTION | FUNCALL ACTION :== 'error' [ '(' EXPR(int) [ ',' EXPR(string) ] ')' | EXPR(int) [ EXPR(string) ] | 'call' identifier # in vcl_fetch only | 'esi' # in vcl_hash only | 'hash_data' '(' EXPRESSION ')' | 'panic' EXPRESSION # note: purge expressions are semantically special | 'purge' '(' EXPRESSION ')' | 'purge_url' '(' EXPRESSION ')' | 'remove' variable # V2.0: could do actions without return keyword | 'return' '(' ( deliver | error | fetch | hash | lookup | pass | pipe | restart ) ')' # rollback what? | 'rollback' | 'set' variable assoper EXPRESSION | 'synthetic' EXPRESSION | 'unset' variable FUNCALL ::= variable '(' [ { FUNCALL | expr | string-list } ] ')' EXPRESSION ::= 'true' | 'false' | constant | FUNCALL | variable | '(' EXPRESSION ')' | number '*' number | number '/' number # add two strings without operator in 2.x series | duration '*' doubleval | string '+' string | number '+' number | number '-' number | timeval '+' duration | timeval '-' duration | timeval '-' timeval | duration '+' duration | duration '-' duration | EXPRESSION comparison EXPRESSION | '!' EXPRESSION | EXPRESSION '&&' EXPRESSION | EXPRESSION '||' EXPRESSION Terminals: ----------------- timeval ::= doubleval timeunit duration ::= ['-'] timeval doubleval ::= { number [ '.' [number] ] } timeunit ::= 'ms' | 's' | 'm' | 'h' | 'd' | 'w' uintval ::= { number } # unsigned fieldval ::= timeval | doubleval | timeunit | uintval constant ::= string | fieldval iprange ::= string [ '/' number ] variable ::= identifier [ '.' identifier ] comparison ::= '==' | '!=' | '<' | '>' | '<= | '>=' | '~' | '!~' assoper ::= '=' | '+=' | '-=' | '*=' | '/=' | comment ::= /* !(/*|*/)* */ // !(\n)* $ # !(\n)* $ long-string ::= '{"' !("})* '"}' shortstring ::= '"' !(\")* '"' inline-c ::= !(('}C') string ::= shortstring | longstring identifier ::= [a-zA-Z][a-zA-Z0-9_-]* number ::= [0-9]+ Lexer tokens: ----------------- ! % & + * , - . / ; < = > { | } ~ ( ) != NEQ !~ NOMATCH ++ INC += INCR *= MUL -- DEC -= DECR /= DIV << SHL <= LEQ == EQ >= GEQ >> SHR || COR && CAND elseif ELSEIF elsif ELSIF include INCLUDE if IF # include statements omitted as they are pre-processed away, they are not a syntactic device. -- http://kacper.doesntexist.org http://windows.dontexist.com Employ no technique to gain supreme enlightment. - Mar pa Chos kyi blos gros From phk at phk.freebsd.dk Sun Mar 13 17:39:32 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 13 Mar 2011 16:39:32 +0000 Subject: VCL BNF In-Reply-To: Your message of "Sun, 13 Mar 2011 17:30:44 +0100." Message-ID: <10462.1300034372@critter.freebsd.dk> In message , Kacp er Wysocki writes: >Varnish Control Language grammar in BNF notation >================================================ Not bad! Put it in a wiki page. If you don't have wiki bit, contact me with your trac login, and I'll give you one. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
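To make the grammar above a bit more concrete, here is a small, made-up VCL fragment (the host, ACL contents and subroutine logic are placeholders, not taken from anyone's configuration) that exercises several of the productions: BACKEND, ACL, SUB, IFSTMT and the return/unset actions.

backend www {
    .host = "192.0.2.10";
    .port = "80";
    .connect_timeout = 2s;
    .first_byte_timeout = 30s;
}

acl trusted {
    "localhost";
    "192.0.2.0"/24;
}

sub vcl_recv {
    if (req.request == "POST") {
        return (pass);
    } elsif (req.url ~ "^/static/" && client.ip ~ trusted) {
        # strip cookies so static objects can be cached
        unset req.http.Cookie;
        return (lookup);
    }
}

Feeding fragments like this through the VCL compiler (for instance via vcl.load) is also a quick way to spot places where the BNF and the real parser disagree.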
From perbu at varnish-software.com Sun Mar 13 17:49:24 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 13 Mar 2011 17:49:24 +0100 Subject: VCL BNF In-Reply-To: <10462.1300034372@critter.freebsd.dk> References: <10462.1300034372@critter.freebsd.dk> Message-ID: On Sun, Mar 13, 2011 at 5:39 PM, Poul-Henning Kamp wrote: > In message , > Kacp > er Wysocki writes: > > > >Varnish Control Language grammar in BNF notation > >================================================ > > Not bad! > > Put it in a wiki page. If you don't have wiki bit, contact me with > your trac login, and I'll give you one. > Shouldn't we rather keep it in the reference docs? -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at darkmere.gen.nz Sun Mar 13 22:22:03 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 10:22:03 +1300 (NZDT) Subject: Always sending gzip? Message-ID: Getting a weird thing where the server is returning gzip'd content even when we don't ask for it. Running 2.1.5 from the rpm packages Our config has: # If Accept-Encoding contains "gzip" then make it only include that. If not # then remove header completely. deflate just causes problems # if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|mp4|flv|pdf)$") { # No point in compressing these remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else { # unkown algorithm remove req.http.Accept-Encoding; } } But: $ curl -v -H "Accept-Encoding: fff" -H "Host: www.xxxx.com" http://yyy/themes/0/scripts/getTime.cfm > /dev/null > GET /themes/0/scripts/getTime.cfm HTTP/1.1 > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Accept: */* > Accept-Encoding: fff > Host: www.xxxx.com > < HTTP/1.1 200 OK < Server: Apache < Cache-Control: max-age=300 < X-UA-Compatible: IE=EmulateIE7 < Content-Type: text/javascript < Proxy-Connection: Keep-Alive < Content-Encoding: gzip < Content-Length: 176 < Date: Sun, 13 Mar 2011 21:13:24 GMT < Connection: keep-alive < Cache-Info: Object-Age=228, hits=504, Cache-Host=yyy, Backend-Host=apn121, healthy=yes 34 SessionOpen c 1.2.3.4 21147 :80 34 ReqStart c 1.2.3.4 21147 248469172 34 RxRequest c GET 34 RxURL c /themes/0/scripts/getTime.cfm 34 RxProtocol c HTTP/1.1 34 RxHeader c User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 34 RxHeader c Accept: */* 34 RxHeader c Accept-Encoding: fff 34 RxHeader c Host: www.xxx.com 34 VCL_call c recv lookup 34 VCL_call c hash hash 34 Hit c 248452316 34 VCL_call c hit deliver 34 VCL_call c deliver deliver 34 TxProtocol c HTTP/1.1 34 TxStatus c 200 34 TxResponse c OK 34 TxHeader c Server: Apache 34 TxHeader c Cache-Control: max-age=300 34 TxHeader c X-UA-Compatible: IE=EmulateIE7 34 TxHeader c Content-Type: text/javascript 34 TxHeader c Proxy-Connection: Keep-Alive 34 TxHeader c Content-Encoding: gzip 34 TxHeader c Content-Length: 176 34 TxHeader c Accept-Ranges: bytes 34 TxHeader c Date: Sun, 13 Mar 2011 21:11:36 GMT 34 TxHeader c Connection: keep-alive 34 TxHeader c Cache-Info: Object-Age=120, hits=243, Cache-Host=yyy, Backend-Host=apn121, healthy=yes 34 Length c 176 34 ReqEnd c 248469172 1300050696.585048914 
1300050696.585428953 0.000026941 0.000339031 0.000041008 -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From perbu at varnish-software.com Sun Mar 13 22:37:59 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 13 Mar 2011 22:37:59 +0100 Subject: Always sending gzip? In-Reply-To: References: Message-ID: On Sun, Mar 13, 2011 at 10:22 PM, Simon Lyall wrote: > > Getting a weird thing where the server is returning gzip'd content even > when we don't ask for it. > If your server is not sending "Vary: Accept-Encoding" Varnish won't know that it needs to Vary on the A-E header. Just add it and Varnish will do the right thing. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Dodd at comicrelief.com Sun Mar 13 22:51:24 2011 From: B.Dodd at comicrelief.com (Ben Dodd) Date: Sun, 13 Mar 2011 21:51:24 +0000 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Did anyone manage to find a workable solution for this? On 10 Mar 2011, at 13:29, Ronan Mullally wrote: > Hej Tollef, > > On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > >> | 33 FetchError c http first read error: -1 0 (Success) >> >> This just means the backend closed the connection on us. >> >> | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) >> >> This is a timeout, however: >> >> | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 >> >> That 2.89s backend response time doesn't add up with your timeouts. Can >> you see if you can get a tcpdump of what's going on? > > I'll see what I can do. Varnish is serving an average of about 20 objects > per second so there'll be a lot of data to gather / sift through. > > The following numbers might prove useful - they're counts of the number of > successful GETs, POSTs and 503s since 17:00 yesterday. > > GET POST > Hour 200 503 200 503 > ------------------------------------------ > 17:00 72885 0 (0.00%) 841 0 (0.00%) > 18:00 69266 0 (0.00%) 858 6 (0.70%) > 19:00 65030 0 (0.00%) 866 3 (0.35%) > 20:00 70289 0 (0.00%) 975 8 (0.82%) > 21:00 105767 0 (0.00%) 1214 5 (0.41%) > 22:00 86236 0 (0.00%) 834 3 (0.36%) > 23:00 67078 0 (0.00%) 893 2 (0.22%) > 00:00 48042 0 (0.00%) 669 4 (0.60%) > 01:00 35966 0 (0.00%) 479 0 (0.00%) > 02:00 29598 0 (0.00%) 395 3 (0.76%) > 03:00 25819 0 (0.00%) 359 0 (0.00%) > 04:00 22835 0 (0.00%) 366 4 (1.09%) > 05:00 24487 0 (0.00%) 315 1 (0.32%) > 06:00 26583 0 (0.00%) 353 4 (1.13%) > 07:00 30433 0 (0.00%) 398 2 (0.50%) > 08:00 37394 0 (0.00%) 363 9 (2.48%) > 09:00 44462 1 (0.00%) 526 4 (0.76%) > 10:00 49891 2 (0.00%) 611 4 (0.65%) > 11:00 54826 1 (0.00%) 599 7 (1.17%) > 12:00 60765 6 (0.01%) 615 1 (0.16%) > 13:00 18941 0 (0.00%) 190 0 (0.00%) > > Apart from a handful of 503s to GET requests this morning (which I've not > had a chance to investigate) the problem almost exclusively affects POSTs. > The frequency of the problem does not appear to be related to the load - > the highest incidence does not match the busiest periods. > > I'll get back to you when I have a few packet traces. It will most likely > be next week. FWIW, I forgot to mention in my previous posts, I'm running > 2.1.5 on a Debian Lenny VM. 
> > > -Ronan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > ______________________________________________________________________ > This email has been scanned by the MessageLabs Email Security System. > For more information please visit http://www.messagelabs.com/email > ______________________________________________________________________ Comic Relief 1st Floor 89 Albert Embankment London SE1 7TP Tel: 020 7820 2000 Fax: 020 7820 2222 red at comicrelief.com www.comicrelief.com Comic Relief is the operating name of Charity Projects, a company limited by guarantee and registered in England no. 1806414; registered charity 326568 (England & Wales) and SC039730 (Scotland). Comic Relief Ltd is a subsidiary of Charity Projects and registered in England no. 1967154. Registered offices: Hanover House, 14 Hanover Square, London W1S 1HP. VAT no. 773865187. This email (and any attachment) may contain confidential and/or privileged information. If you are not the intended addressee, you must not use, disclose, copy or rely on anything in this email and should contact the sender and delete it immediately. The views of the author are not necessarily those of Comic Relief. We cannot guarantee that this email (and any attachment) is virus free or has not been intercepted and amended, so do not accept liability for any damage resulting from software viruses. You should carry out your own virus checks. From simon at darkmere.gen.nz Mon Mar 14 00:12:50 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 12:12:50 +1300 (NZDT) Subject: Always sending gzip? In-Reply-To: References: Message-ID: On Sun, 13 Mar 2011, Per Buer wrote: > On Sun, Mar 13, 2011 at 10:22 PM, Simon Lyall wrote: > > Getting a weird thing where the server is returning gzip'd content even when we don't ask for it. > > > If your server is not sending "Vary: Accept-Encoding" Varnish won't know that it needs to Vary on the A-E header. Just add it > and Varnish will do the right thing. Of course, it was turning up sometimes but not always. I've changed the backend to force it in and that seems to have fixed the problem (and hopefully another we are seeing). Thankyou -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From simon at darkmere.gen.nz Mon Mar 14 03:36:39 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 15:36:39 +1300 (NZDT) Subject: Refetch new page according to result? Message-ID: This looks impossible but I thought I'd ask. The idea I had was that the cache could fetch a page and according to the result fetch another page an serve that to the user. So I could look for a 301 and if the 301 pointed to my domain I could refetch the new URL and deliver that content (without giving the user a 301). However going through the docs this appears to be impossible since I won't know the result of the backend call until vcl_fetch or vcl_deliver and neither of these give me the option to go back to vcl_recv This is for archived pages, so the app would check the archive status early in the transaction and just return a quick pointer to the archive url (which might be just flat file on disk) which varnish could grab, serve and cache forever with the user not being redirected. 
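A rough sketch of how that could look in 2.1-style VCL, along the lines of the change-req.url-and-restart approach suggested in a later reply; the host name and path below are placeholders, and it assumes the 301 points at a URL the same backend can serve:

sub vcl_fetch {
    # Follow the app's "moved to archive" 301 inside Varnish instead of
    # sending the redirect to the client; req.restarts guards against loops.
    if (req.restarts == 0 && beresp.status == 301 &&
        beresp.http.Location ~ "^http://www\.example\.com/archive/") {
        set req.url = regsub(beresp.http.Location, "^http://[^/]+", "");
        return (restart);
    }
}

On the restarted pass the rewritten URL is looked up, fetched and cached as usual, so the client never sees the redirect.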
-- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From phk at phk.freebsd.dk Mon Mar 14 08:19:38 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 07:19:38 +0000 Subject: VCL BNF In-Reply-To: Your message of "Sun, 13 Mar 2011 17:49:24 +0100." Message-ID: <22707.1300087178@critter.freebsd.dk> In message , Per Buer writes: >> >> >Varnish Control Language grammar in BNF notation >> >================================================ >> >> Not bad! >> >> Put it in a wiki page. If you don't have wiki bit, contact me with >> your trac login, and I'll give you one. >> > >Shouldn't we rather keep it in the reference docs? Works for me too -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From tfheen at varnish-software.com Mon Mar 14 08:29:56 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 14 Mar 2011 08:29:56 +0100 Subject: Refetch new page according to result? In-Reply-To: (Simon Lyall's message of "Mon, 14 Mar 2011 15:36:39 +1300 (NZDT)") References: Message-ID: <87oc5erwgb.fsf@qurzaw.varnish-software.com> ]] Simon Lyall | However going through the docs this appears to be impossible since I | won't know the result of the backend call until vcl_fetch or | vcl_deliver and neither of these give me the option to go back to | vcl_recv You should be able to just change req.url and restart in vcl_fetch. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From schmidt at ze.tum.de Mon Mar 14 08:45:06 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 08:45:06 +0100 Subject: SSL In-Reply-To: <82042.1299771660@critter.freebsd.dk> References: <82042.1299771660@critter.freebsd.dk> Message-ID: <4D7DC782.6050300@ze.tum.de> Am 10.03.2011 16:41, schrieb Poul-Henning Kamp: > In message <20110310152918.GJ1675 at vent.lifeintegrity.localnet>, Allan Wind writ > es: >> Is the current thinking still that SSL support will not be >> integrated into varnish? > > Yes, that is current thinking. I can see no advantages that outweigh > the disadvantages, and a realistic implementation would not be > significantly different from running a separate process to do the > job in the first place. stunnel has the disatwantage that we loose the clientIP information. Intigration of SSL in varnish wouldn't have this problem. with pound thios can be fixen by analysing the forewarded-for header but isn't that elegant. Regards Estartu -- ---------------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de Technische Universit?t M?nchen | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: +49 89 289-25270 | PGP-PublicKey Fax: +49 89 289-25257 | on request -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From phk at phk.freebsd.dk Mon Mar 14 08:55:40 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 07:55:40 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 08:45:06 +0100." <4D7DC782.6050300@ze.tum.de> Message-ID: <41707.1300089340@critter.freebsd.dk> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >stunnel has the disatwantage that we loose the clientIP information. 
Doesn't it set a header with this information ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From perbu at varnish-software.com Mon Mar 14 09:06:28 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 09:06:28 +0100 Subject: SSL In-Reply-To: <41707.1300089340@critter.freebsd.dk> References: <4D7DC782.6050300@ze.tum.de> <41707.1300089340@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 8:55 AM, Poul-Henning Kamp wrote: > In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: > > >stunnel has the disatwantage that we loose the clientIP information. > > Doesn't it set a header with this information ? > Yes. If we use the patched stunnel version that haproxy also uses. It requires Varnish to understand the protocol however, as the address of the client is sent at the beginning of the conversation in binary form. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Mon Mar 14 09:14:51 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 08:14:51 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 09:06:28 +0100." Message-ID: <41829.1300090491@critter.freebsd.dk> In message , Per Buer writes: >Yes. If we use the patched stunnel version that haproxy also uses. It >requires Varnish to understand the protocol however, as the address of the >client is sent at the beginning of the conversation in binary form. I would say "Use a more intelligent SSL proxy" then... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From rtshilston at gmail.com Mon Mar 14 09:22:02 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Mon, 14 Mar 2011 08:22:02 +0000 Subject: SSL In-Reply-To: <41829.1300090491@critter.freebsd.dk> References: <41829.1300090491@critter.freebsd.dk> Message-ID: <4A7E853B-F74C-415F-B324-6FEBCDA0D7E5@gmail.com> On 14 Mar 2011, at 08:14, Poul-Henning Kamp wrote: > In message , Per > Buer writes: > >> Yes. If we use the patched stunnel version that haproxy also uses. It >> requires Varnish to understand the protocol however, as the address of the >> client is sent at the beginning of the conversation in binary form. > > I would say "Use a more intelligent SSL proxy" then... We're using Varnish successfully with nginx. 
The config looks like: ===== worker_processes 1; error_log /var/log/nginx/global-error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { ssl on; ssl_certificate /etc/ssl/example.com.crt; ssl_certificate_key /etc/ssl/example.com.key; listen a.b.c.4 default ssl; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; # Proxy any requests to the local varnish instance location / { proxy_set_header "Host" $host; proxy_set_header "X-Forwarded-By" "Nginx-a.b.c.4"; proxy_set_header "X-Forwarded-For" $proxy_add_x_forwarded_for; proxy_pass a.b.c.5; } } } ==== From schmidt at ze.tum.de Mon Mar 14 09:34:41 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 09:34:41 +0100 Subject: SSL In-Reply-To: <41707.1300089340@critter.freebsd.dk> References: <41707.1300089340@critter.freebsd.dk> Message-ID: <4D7DD321.7000906@ze.tum.de> Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: > In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: > >> stunnel has the disatwantage that we loose the clientIP information. > > Doesn't it set a header with this information ? It's a tunnel. It doesn't change the stream. As I said, we use pound because it sets the header. But its another daemon to run and to setup. Another component that could fail. Integrating SSL in varnish would reduce the complexity. Regards Estartu -- ------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de TU-M?nchen | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: 089/289-25270 | Fax: 089/289-25257 | PGP-Publickey auf Anfrage -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From kacperw at gmail.com Mon Mar 14 11:16:44 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 11:16:44 +0100 Subject: VCL BNF In-Reply-To: <22707.1300087178@critter.freebsd.dk> References: <22707.1300087178@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 8:19 AM, Poul-Henning Kamp wrote: > In message , Per > Buer writes: >>> >Varnish Control Language grammar in BNF notation >>> >>> Not bad! >>> >>> Put it in a wiki page. ?If you don't have wiki bit, contact me with >>> your trac login, and I'll give you one. >>> >> >>Shouldn't we rather keep it in the reference docs? > > Works for me too The BNF might not be 100% complete yet - there might be bugs - so wiki is appropriate. kwy is my trac login. 0K From phk at phk.freebsd.dk Mon Mar 14 11:23:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 10:23:08 +0000 Subject: VCL BNF In-Reply-To: Your message of "Mon, 14 Mar 2011 11:16:44 +0100." Message-ID: <69458.1300098188@critter.freebsd.dk> In message , Kacp er Wysocki writes: >The BNF might not be 100% complete yet - there might be bugs - so wiki >is appropriate. kwy is my trac login. Agreed. You should have wiki bit now. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From kacperw at gmail.com Mon Mar 14 12:05:27 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 12:05:27 +0100 Subject: SSL In-Reply-To: <4D7DD321.7000906@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 9:34 AM, Gerhard Schmidt wrote: > Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: >> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >> >>> stunnel has the disatwantage that we loose the clientIP information. >> >> Doesn't it set a header with this information ? > > It's a tunnel. It doesn't change the stream. As I said, we use pound because > it sets the header. But its another daemon to run and to setup. Another > component that could fail. Integrating SSL in varnish would reduce the > complexity. What you meant to say is "integrating SSL in Varnish would increase complexity". Putting that component inside varnish doesn't automatically make it infallible. As an added bonus, if SSL is in a separate process it won't bring the whole server down if it fails, if that's the kind of stuff you're worried about. 0K -- http://kacper.doesntexist.org http://windows.dontexist.com Employ no technique to gain supreme enlightment. - Mar pa Chos kyi blos gros From kacperw at gmail.com Mon Mar 14 12:21:03 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 12:21:03 +0100 Subject: VCL BNF In-Reply-To: <69458.1300098188@critter.freebsd.dk> References: <69458.1300098188@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 11:23 AM, Poul-Henning Kamp wrote: > In message , > Kacper Wysocki writes: > >>The BNF might not be 100% complete yet - there might be bugs - so wiki >>is appropriate. kwy is my trac login. > > Agreed. > > You should have wiki bit now. http://www.varnish-cache.org/trac/wiki/VCL.BNF I put a link under Documentation. 0K From schmidt at ze.tum.de Mon Mar 14 13:00:23 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 13:00:23 +0100 Subject: SSL In-Reply-To: References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> Message-ID: <4D7E0357.4070204@ze.tum.de> Am 14.03.2011 12:05, schrieb Kacper Wysocki: > On Mon, Mar 14, 2011 at 9:34 AM, Gerhard Schmidt wrote: >> Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: >>> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >>> >>>> stunnel has the disatwantage that we loose the clientIP information. >>> >>> Doesn't it set a header with this information ? >> >> It's a tunnel. It doesn't change the stream. As I said, we use pound because >> it sets the header. But its another daemon to run and to setup. Another >> component that could fail. Integrating SSL in varnish would reduce the >> complexity. > > What you meant to say is "integrating SSL in Varnish would increase > complexity". > Putting that component inside varnish doesn't automatically make it > infallible. As an added bonus, if SSL is in a separate process it > won't bring the whole server down if it fails, if that's the kind of > stuff you're worried about. It does kill your service if your service is SSL based. Managing more config and more daemons always increases the complexity. More daemons increase the probability of failure and increase the monitoring requirements. More daemons increase the probability of security problems. More daemons increase the amount of time spent keeping the system up to date. It might increase the complexity of Varnish but not the system as a whole.
Regards Estartu -- ------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de TU-München | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: 089/289-25270 | Fax: 089/289-25257 | PGP public key on request From perbu at varnish-software.com Mon Mar 14 13:10:41 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 13:10:41 +0100 Subject: SSL In-Reply-To: <4D7E0357.4070204@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> <4D7E0357.4070204@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 1:00 PM, Gerhard Schmidt wrote: > > It does kill your service if your service is SSL based. > > Managing more config and more daemons always increases the complexity. > More daemons increase the probability of failure and increase the monitoring > requirements. > More daemons increase the probability of security problems. > More daemons increase the amount of time spent keeping the system up to > date. > First of all, Varnish is probably never getting SSL support built in, so you can stop beating that horse. Also, in my opinion, it's easier to have two simple systems than one complex system. Having small dedicated programs is the beautiful design principle of Unix, and as long as it won't influence performance I'm sold. IMO this is mostly a packaging issue. If we repackage stunnel as "varnish-ssl" and make it "just work" it will be dead simple. It does, however, put the pressure on us to maintain it, but that is minor. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From phk at phk.freebsd.dk Mon Mar 14 13:17:59 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 12:17:59 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 13:00:23 +0100." <4D7E0357.4070204@ze.tum.de> Message-ID: <54017.1300105079@critter.freebsd.dk> In message <4D7E0357.4070204 at ze.tum.de>, Gerhard Schmidt writes: >Managing more config and more daemons always increases the complexity. >More daemons increase the probability of failure and increase the monitoring >requirements. >More daemons increase the probability of security problems. >More daemons increase the amount of time spent keeping the system up to date. > >It might increase the complexity of Varnish but not the system as a whole. I can absolutely guarantee you that there would be no relevant difference in complexity, because the only way we can realistically add SSL to varnish is to start another daemon process to do it. Adding that complexity to Varnish will decrease the overall security relative to having the SSL daemon be a self-contained piece of software, simply as a matter of code complexity. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence.
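Whichever terminator ends up in front, the VCL side of the client-address problem raised earlier in the thread is usually handled with something like the sketch below: only trust X-Forwarded-For when the connection really comes from the terminator. The ACL contents are an assumption about where the terminator runs, not part of any setup described above.

# Assumes the SSL terminator (pound, nginx, a patched stunnel, ...) runs
# locally and sets X-Forwarded-For; everything else is a direct client.
acl terminators {
    "localhost";
}

sub vcl_recv {
    if (!(client.ip ~ terminators) || !req.http.X-Forwarded-For) {
        # Direct client: record its address ourselves.
        remove req.http.X-Forwarded-For;
        set req.http.X-Forwarded-For = client.ip;
    }
    # Otherwise keep the header the terminator already set.
}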
From perbu at varnish-software.com Mon Mar 14 14:06:15 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 14:06:15 +0100 Subject: SSL In-Reply-To: <4D7E10F9.1040904@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> <4D7E0357.4070204@ze.tum.de> <4D7E10F9.1040904@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 1:58 PM, Gerhard Schmidt wrote: > > > Also, in my opinion, it's easier to have two simple systems than one > complex > > system. Having small dedicated programs is the beautiful design principle > of > > Unix and as long as it won't influence performance I'm sold. > > If there was a way to use simple dedicated service without loosing > information > this would be correct. But there isn't a simple daemon to accept ssl > connections for varnish without loosing the Client Information. > You didn't read the whole thread, did you? You obviously don't know about the PROXY protocol mode of the patched stunnel version we're talking about. It requires slight modifications of Varnish and would transmit client.ip initially when talking with Varnish. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.chiswell at mangahigh.com Mon Mar 14 18:02:16 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 14 Mar 2011 17:02:16 +0000 Subject: VCL Formatting Message-ID: <4D7E4A18.3030701@mangahigh.com> Hi, Does any know of, or have written, a code formatter for Varnish's VCL files which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? Ideally, with full syntax coloring and type hinting - but just something that can understand that VCL format and make sure the indents are "good" will do! Many thanks, Richard Chiswell http://twitter.com/rchiswell From richard.chiswell at mangahigh.com Mon Mar 14 18:04:28 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 14 Mar 2011 17:04:28 +0000 Subject: VCL Formatting Message-ID: <4D7E4A9C.6020907@mangahigh.com> Hi, Does any know of, or have written, a code formatter for Varnish's VCL files which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? Ideally, with full syntax coloring and type hinting - but just something that can understand that VCL format and make sure the indents are "good" will do! Many thanks, Richard Chiswell http://twitter.com/rchiswell From perbu at varnish-software.com Mon Mar 14 18:33:14 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 18:33:14 +0100 Subject: VCL Formatting In-Reply-To: <4D7E4A18.3030701@mangahigh.com> References: <4D7E4A18.3030701@mangahigh.com> Message-ID: Hi, On Mon, Mar 14, 2011 at 6:02 PM, Richard Chiswell < richard.chiswell at mangahigh.com> wrote: > Hi, > > Does any know of, or have written, a code formatter for Varnish's VCL files > which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? > I use c-mode in Emacs - works ok for my somewhat limited needs. There probably is some codematting stuff for C you can use. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From checker at d6.com Mon Mar 14 23:30:14 2011 From: checker at d6.com (Chris Hecker) Date: Mon, 14 Mar 2011 15:30:14 -0700 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> Message-ID: <4D7E96F6.4060707@d6.com> Anybody have any ideas? They're not all the same mime type, so I think putting them in an uncached dir is better if there's no way to figure it out in vcl. Chris On 2011/03/13 07:26, AD wrote: > i dont think you can check the body size (at least it seems that way > with the existing req.* objects ). If you know the mime-type of the > file you might just be able to pipe the mime type if that works for all > file sizes ? > > I wonder if there is a way to pass the req object into some inline C > that can access the body somehow? > > On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker > wrote: > > > I have a 400mb file that I just want apache to serve. What's the > best way to do this? I can put it in a directory and tell varnish > not to cache stuff that matches that dir, but I'd rather just make a > general rule that varnish should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From moseleymark at gmail.com Mon Mar 14 23:51:19 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 14 Mar 2011 15:51:19 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7E96F6.4060707@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7E96F6.4060707@d6.com> Message-ID: On Mon, Mar 14, 2011 at 3:30 PM, Chris Hecker wrote: > > Anybody have any ideas? ?They're not all the same mime type, so I think > putting them in an uncached dir is better if there's no way to figure it out > in vcl. > > Chris > > > > On 2011/03/13 07:26, AD wrote: >> >> i dont think you can check the body size (at least it seems that way >> with the existing req.* objects ). ?If you know the mime-type of the >> file you might just be able to pipe the mime type if that works for all >> file sizes ? >> >> I wonder if there is a way to pass the req object into some inline C >> that can access the body somehow? >> >> On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker > > wrote: >> >> >> ? ?I have a 400mb file that I just want apache to serve. ?What's the >> ? ?best way to do this? ?I can put it in a directory and tell varnish >> ? ?not to cache stuff that matches that dir, but I'd rather just make a >> ? ?general rule that varnish should ignore >=20mb files or whatever. >> >> ? ?Thanks, >> ? ?Chris I was asking about the same thing in this thread: http://comments.gmane.org/gmane.comp.web.varnish.misc/4741 Check out Tollef's suggestion towards the end. That's what I've been using. The one drawback is that it's still fetched by varnish *completely* in the first, not-yet-restarted request, which means that a) you're fetching it twice; and b) it'll still stored albeit momentarily, so it'll evict stuff if there's not enough room. Before that, I wasn't sending any reqs for anything matching stuff like .avi or .wmv to varnish (from an nginx frontend). It'd be kind of neat if you could do a call-out and for anything matching a likely large file (i.e. has extension matching .avi, .wmv, etc), and do a HEAD request to determine the response size (or whatever you wanted to look for) before doing the GET. 
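The restart trick referred to above looks roughly like this in 2.1-style VCL. It is only a sketch: the marker header name is made up, the regex treats any Content-Length of nine or more digits (roughly 100MB and up) as "large", and it only helps for responses that actually carry a Content-Length header.

sub vcl_recv {
    if (req.http.X-Pass-Large-Object) {
        return (pass);
    }
}

sub vcl_fetch {
    # Too big to keep: mark the request and start over as a pass,
    # so the object is fetched again but never stored in the cache.
    if (beresp.http.Content-Length ~ "^[0-9]{9,}$" &&
        !req.http.X-Pass-Large-Object) {
        set req.http.X-Pass-Large-Object = "1";
        return (restart);
    }
}

As noted above, the first fetch still pulls the whole object before the restart throws it away, so this avoids polluting the cache but not the double transfer.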
From straightflush at gmail.com Tue Mar 15 02:48:08 2011 From: straightflush at gmail.com (AD) Date: Mon, 14 Mar 2011 21:48:08 -0400 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7E96F6.4060707@d6.com> Message-ID: whats interesting is the last comment All this happens over localhost, so it's quite fast, but in the | interest of efficiency, is there something I can set or call so that | it closes that first connection almost immediately? Having to refetch | a 800meg file off of NFS might hurt -- even if a good chunk of it is | still in the OS block cache. You'd need to do this using inline C, but yes, anything is possible. (Sorry, I don't have an example for it here) What do you need to do via inline C to prevent the full 800 MB from being downloaded even the first time? On Mon, Mar 14, 2011 at 6:51 PM, Mark Moseley wrote: > On Mon, Mar 14, 2011 at 3:30 PM, Chris Hecker wrote: > > > > Anybody have any ideas? They're not all the same mime type, so I think > > putting them in an uncached dir is better if there's no way to figure it > out > > in vcl. > > > > Chris > > > > > > > > On 2011/03/13 07:26, AD wrote: > >> > >> i dont think you can check the body size (at least it seems that way > >> with the existing req.* objects ). If you know the mime-type of the > >> file you might just be able to pipe the mime type if that works for all > >> file sizes ? > >> > >> I wonder if there is a way to pass the req object into some inline C > >> that can access the body somehow? > >> > >> On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker >> > wrote: > >> > >> > >> I have a 400mb file that I just want apache to serve. What's the > >> best way to do this? I can put it in a directory and tell varnish > >> not to cache stuff that matches that dir, but I'd rather just make a > >> general rule that varnish should ignore >=20mb files or whatever. > >> > >> Thanks, > >> Chris > > > I was asking about the same thing in this thread: > > http://comments.gmane.org/gmane.comp.web.varnish.misc/4741 > > Check out Tollef's suggestion towards the end. That's what I've been > using. The one drawback is that it's still fetched by varnish > *completely* in the first, not-yet-restarted request, which means that > a) you're fetching it twice; and b) it'll still stored albeit > momentarily, so it'll evict stuff if there's not enough room. > > Before that, I wasn't sending any reqs for anything matching stuff > like .avi or .wmv to varnish (from an nginx frontend). > > It'd be kind of neat if you could do a call-out and for anything > matching a likely large file (i.e. has extension matching .avi, .wmv, > etc), and do a HEAD request to determine the response size (or > whatever you wanted to look for) before doing the GET. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at d6.com Tue Mar 15 08:42:46 2011 From: checker at d6.com (Chris Hecker) Date: Tue, 15 Mar 2011 00:42:46 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F17D5.2090002@bizztravel.nl> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> Message-ID: <4D7F1876.7080809@d6.com> Yeah, I think if I can't do it Right (which I define as checking the file size in the vcl), then I'm just going to make blah.com/uncached/* be uncached. 
I don't want to transfer it once just to throw it away. Chris On 2011/03/15 00:40, Martin Boer wrote: > I've been reading this discussion and imho the most elegant way to do it > is to have a upload directory X and 2 download directories Y and Z with > a script in between that decides whether it's cacheable and move the > file to Y or uncacheable and put it in Z. > All the other solutions mentioned in between are far more intelligent > and much more likely to backfire in some way or another. > > Just my 2 cents. > Martin > > > On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >> I have a 400mb file that I just want apache to serve. What's the best >> way to do this? I can put it in a directory and tell varnish not to >> cache stuff that matches that dir, but I'd rather just make a general >> rule that varnish should ignore >=20mb files or whatever. >> >> Thanks, >> Chris >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > > From martin.boer at bizztravel.nl Tue Mar 15 08:40:05 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Tue, 15 Mar 2011 08:40:05 +0100 Subject: best way to not cache large files? In-Reply-To: <4D7C47D0.9050809@d6.com> References: <4D7C47D0.9050809@d6.com> Message-ID: <4D7F17D5.2090002@bizztravel.nl> I've been reading this discussion and imho the most elegant way to do it is to have a upload directory X and 2 download directories Y and Z with a script in between that decides whether it's cacheable and move the file to Y or uncacheable and put it in Z. All the other solutions mentioned in between are far more intelligent and much more likely to backfire in some way or another. Just my 2 cents. Martin On 03/13/2011 05:28 AM, Chris Hecker wrote: > > I have a 400mb file that I just want apache to serve. What's the best > way to do this? I can put it in a directory and tell varnish not to > cache stuff that matches that dir, but I'd rather just make a general > rule that varnish should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From perbu at varnish-software.com Tue Mar 15 10:46:16 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 15 Mar 2011 10:46:16 +0100 Subject: Online training Message-ID: Hi List. I promise I won't do this to often but I wanted to make you aware that we (Varnish Software) will now be offering online training. We have free seats in the upcoming session on the 24th and 25th of March (targeted mainly towards European time zones). We'll have sessions for US timezones in April. We're also planning a session for NZ and Aussies, but no date is set for this session yet. If your interested please drop me a mail. All our training is conducted by varnish cache committers. Regards, Per. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roberto.fernandezcrisial at gmail.com Tue Mar 15 14:50:30 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 15 Mar 2011 10:50:30 -0300 Subject: Online training In-Reply-To: References: Message-ID: Hi guys, I need some help and I think you can help me. A few days ago I was realized that Varnish is showing some error messages when debug mode is enable on varnishlog: 4741 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2959 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2591 Debug c "Write error, retval = -1, len = 168289, errno = Broken pipe" 3517 Debug c "Write error, retval = -1, len = 114421, errno = Broken pipe" I want to know what are those error messages and why are they generated. Any suggestion? Thank you! Roberto O. Fern?ndez Crisial -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Tue Mar 15 14:51:13 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 15 Mar 2011 10:51:13 -0300 Subject: VarnishLog: Broken pipe (Debug) Message-ID: Hi guys, I need some help and I think you can help me. A few days ago I was realized that Varnish is showing some error messages when debug mode is enable on varnishlog: 4741 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2959 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2591 Debug c "Write error, retval = -1, len = 168289, errno = Broken pipe" 3517 Debug c "Write error, retval = -1, len = 114421, errno = Broken pipe" I want to know what are those error messages and why are they generated. Any suggestion? Thank you! Roberto O. Fern?ndez Crisial -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Tue Mar 15 17:44:26 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Tue, 15 Mar 2011 09:44:26 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F1876.7080809@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: On Tue, Mar 15, 2011 at 12:42 AM, Chris Hecker wrote: > > Yeah, I think if I can't do it Right (which I define as checking the file > size in the vcl), then I'm just going to make blah.com/uncached/* be > uncached. ?I don't want to transfer it once just to throw it away. > > Chris > > > On 2011/03/15 00:40, Martin Boer wrote: >> >> I've been reading this discussion and imho the most elegant way to do it >> is to have a upload directory X and 2 download directories Y and Z with >> a script in between that decides whether it's cacheable and move the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >>> >>> I have a 400mb file that I just want apache to serve. What's the best >>> way to do this? I can put it in a directory and tell varnish not to >>> cache stuff that matches that dir, but I'd rather just make a general >>> rule that varnish should ignore >=20mb files or whatever. >>> >>> Thanks, >>> Chris Yeah, if you have control over directory names, that's by far the better way to go. 
I've got shared hosting customers behind mine, so I've got practically no control over where they put stuff under their webroot. From kbrownfield at google.com Tue Mar 15 21:16:11 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 13:16:11 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F1876.7080809@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: I'm assuming that in this case it's not possible for you to have the backend server emit an appropriate Cache-Control or Expires header based on the size of the file? The server itself will know the file size before transmission, and the reindeer caching games would not be necessary. ;-) That's definitely the Right Way, but it would require control over the backend, which is often not possible. Apache unfortunately doesn't have a built-in mechanism/module to emit a header based on file size, at least that I can find. :-( -- kb On Tue, Mar 15, 2011 at 00:42, Chris Hecker wrote: > > Yeah, I think if I can't do it Right (which I define as checking the file > size in the vcl), then I'm just going to make blah.com/uncached/* be > uncached. I don't want to transfer it once just to throw it away. > > Chris > > > > On 2011/03/15 00:40, Martin Boer wrote: > >> I've been reading this discussion and imho the most elegant way to do it >> is to have a upload directory X and 2 download directories Y and Z with >> a script in between that decides whether it's cacheable and move the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >>> >>> I have a 400mb file that I just want apache to serve. What's the best >>> way to do this? I can put it in a directory and tell varnish not to >>> cache stuff that matches that dir, but I'd rather just make a general >>> rule that varnish should ignore >=20mb files or whatever. >>> >>> Thanks, >>> Chris >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >>> >>> >> >> > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From do.not.eat.yellow.snow at gmail.com Tue Mar 15 21:30:02 2011 From: do.not.eat.yellow.snow at gmail.com (Martin Strand) Date: Tue, 15 Mar 2011 21:30:02 +0100 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: On Tue, 15 Mar 2011 21:16:11 +0100, Ken Brownfield wrote: > > Apache unfortunately doesn't have a built-in mechanism/module to emit a > header based on file size What about the "Content-Length" header? Apache seems to emit that automatically. From kbrownfield at google.com Tue Mar 15 22:59:49 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 14:59:49 -0700 Subject: best way to not cache large files? 
In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: I think mod_headers/SetEnvIf/etc is applied at request time, before processing occurs (the parameters they have available to them are quite limited). But there may be a way to do later in the chain, and certainly with a custom mod. -- kb On Tue, Mar 15, 2011 at 13:30, Martin Strand < do.not.eat.yellow.snow at gmail.com> wrote: > On Tue, 15 Mar 2011 21:16:11 +0100, Ken Brownfield > wrote: > >> >> Apache unfortunately doesn't have a built-in mechanism/module to emit a >> header based on file size >> > > What about the "Content-Length" header? Apache seems to emit that > automatically. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at d6.com Wed Mar 16 00:56:37 2011 From: checker at d6.com (Chris Hecker) Date: Tue, 15 Mar 2011 16:56:37 -0700 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: <4D7FFCB5.6030105@d6.com> I'm not sure I understand. I have control over the back end, the front end, the middle end, all the ends. However, I thought the problem was there was no way to get varnish to read the header without loading the file into the cache? If that's not true, then shouldn't Content-Length be enough? Chris On 2011/03/15 13:16, Ken Brownfield wrote: > I'm assuming that in this case it's not possible for you to have the > backend server emit an appropriate Cache-Control or Expires header based > on the size of the file? The server itself will know the file size > before transmission, and the reindeer caching games would not be > necessary. ;-) > > That's definitely the Right Way, but it would require control over the > backend, which is often not possible. Apache unfortunately doesn't have > a built-in mechanism/module to emit a header based on file size, at > least that I can find. :-( > -- > kb > > > > On Tue, Mar 15, 2011 at 00:42, Chris Hecker > wrote: > > > Yeah, I think if I can't do it Right (which I define as checking the > file size in the vcl), then I'm just going to make > blah.com/uncached/* be uncached. I > don't want to transfer it once just to throw it away. > > Chris > > > > On 2011/03/15 00:40, Martin Boer wrote: > > I've been reading this discussion and imho the most elegant way > to do it > is to have a upload directory X and 2 download directories Y and > Z with > a script in between that decides whether it's cacheable and move the > file to Y or uncacheable and put it in Z. > All the other solutions mentioned in between are far more > intelligent > and much more likely to backfire in some way or another. > > Just my 2 cents. > Martin > > > On 03/13/2011 05:28 AM, Chris Hecker wrote: > > > I have a 400mb file that I just want apache to serve. What's > the best > way to do this? I can put it in a directory and tell varnish > not to > cache stuff that matches that dir, but I'd rather just make > a general > rule that varnish should ignore >=20mb files or whatever. 
> > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From kbrownfield at google.com Wed Mar 16 03:45:46 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 19:45:46 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7FFCB5.6030105@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> <4D7FFCB5.6030105@d6.com> Message-ID: If you have control over the backend (Apache) it should be made to emit a Cache-Control or Expires header to Varnish to make the object non-cacheable *if the file is too large*. Apache will know the file's size before a request occurs. I was talking about logic within Apache, not Varnish. This is how it's "supposed" to happen. With Varnish, I see no way to avoid downloading the entire file every time. You can control whether the file *stays* in cache, but that's it. If there were a URL pattern (e.g., magic subdirectory), you could conceivably switch to pipe in those cases. Thinking out loud... HTTP servers will send a response to a HEAD request with a Content-Length header that represents the length of the full object had a GET been performed. If your Apache does this (some configurations will disable this), one hack would be to have Varnish send a HEAD request to Apache for every object, set a req flag if the returned content length is too large, then restart, and then have logic that will force pipe if it's too large, otherwise pass. This will double the hits to the back-end, however, so some conditionals would help (only .mov, or only a certain subdirectory, etc.) And I've never tried changing a GET to a HEAD with VCL or inline-C. But usually when something is that difficult, it's a square peg and a round hole. :-) FWIW, -- kb On Tue, Mar 15, 2011 at 16:56, Chris Hecker wrote: > > I'm not sure I understand. I have control over the back end, the front > end, the middle end, all the ends. However, I thought the problem was there > was no way to get varnish to read the header without loading the file into > the cache? If that's not true, then shouldn't Content-Length be enough? > > Chris > > On 2011/03/15 13:16, Ken Brownfield wrote: > >> I'm assuming that in this case it's not possible for you to have the >> backend server emit an appropriate Cache-Control or Expires header based >> on the size of the file? The server itself will know the file size >> before transmission, and the reindeer caching games would not be >> necessary. ;-) >> >> That's definitely the Right Way, but it would require control over the >> backend, which is often not possible. Apache unfortunately doesn't have >> a built-in mechanism/module to emit a header based on file size, at >> least that I can find. :-( >> -- >> kb >> >> >> >> On Tue, Mar 15, 2011 at 00:42, Chris Hecker > > wrote: >> >> >> Yeah, I think if I can't do it Right (which I define as checking the >> file size in the vcl), then I'm just going to make >> blah.com/uncached/* be uncached. 
I >> don't want to transfer it once just to throw it away. >> >> Chris >> >> >> >> On 2011/03/15 00:40, Martin Boer wrote: >> >> I've been reading this discussion and imho the most elegant way >> to do it >> is to have a upload directory X and 2 download directories Y and >> Z with >> a script in between that decides whether it's cacheable and move >> the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more >> intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >> >> I have a 400mb file that I just want apache to serve. What's >> the best >> way to do this? I can put it in a directory and tell varnish >> not to >> cache stuff that matches that dir, but I'd rather just make >> a general >> rule that varnish should ignore >=20mb files or whatever. >> >> Thanks, >> Chris >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisbloom7 at gmail.com Wed Mar 16 14:58:39 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Wed, 16 Mar 2011 09:58:39 -0400 Subject: Session issues when using Varnish Message-ID: I have been investigating an issue on a client's website that is very peculiar. I have verified that the behavior is due to the instance of Varnish that Rackspace configured for us. However, I'm not sure if this constitutes a bug in Varnish or a configuration error. I'm hoping someone can verify it for me one way or the other. Here is the scenario: Some of our PHP pages are protected by way of verifying that certain session variables are set. If not, the user is sent to the login page. We have observed that on URLs in which there is a querystring, and when the last value of that querystring ends in ".jpg", ".jpeg", ".gif", or ".png", and when we have an iptable rule that routes requests from port 80 to Varnish, the session is reset completely. Oddly enough, no other extension seems to have this affect. I have recreated this behavior in a clean PHP file, which I've attached. You can test this script on your own using the following URLs. The ones marked with the * are where the session gets reset. 
http://localhost/test_cdb.php http://localhost/test_cdb.php?foo=1 http://localhost/test_cdb.php?foo=1&baz=bix http://localhost/test_cdb.php?foo=1&baz=bix.far http://localhost/test_cdb.php?foo=1&baz=bix.far.jpg * http://localhost/test_cdb.php?foo=1&baz=bix.fur http://localhost/test_cdb.php?foo=1&baz=bix.gif * http://localhost/test_cdb.php?foo=1&baz=bix.bmp http://localhost/test_cdb.php?foo=1&baz=bix.php http://localhost/test_cdb.php?foo=1&baz=bix.exe http://localhost/test_cdb.php?foo=1&baz=bix.tar http://localhost/test_cdb.php?foo=1&baz=bix.jpeg * Here is the rule we created for iptables -A PREROUTING -t nat -d x.x.x.128 -p tcp -m tcp --dport 80 -j DNAT --to-destination x.x.x.128:6081 Chris Bloom Internet Application Developer -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_cdb.php Type: application/x-httpd-php Size: 721 bytes Desc: not available URL: From bjorn at ruberg.no Wed Mar 16 15:15:04 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Wed, 16 Mar 2011 15:15:04 +0100 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D80C5E8.8040503@ruberg.no> On 03/16/2011 02:58 PM, Chris Bloom wrote: > I have been investigating an issue on a client's website that is very > peculiar. I have verified that the behavior is due to the instance of > Varnish that Rackspace configured for us. However, I'm not sure if this > constitutes a bug in Varnish or a configuration error. I'm hoping > someone can verify it for me one way or the other. > > Here is the scenario: Some of our PHP pages are protected by way of > verifying that certain session variables are set. If not, the user is > sent to the login page. We have observed that on URLs in which there is > a querystring, and when the last value of that querystring ends in > ".jpg", ".jpeg", ".gif", or ".png", and when we have an iptable rule > that routes requests from port 80 to Varnish, the session is reset > completely. Oddly enough, no other extension seems to have this affect. This *looks* like some general Varnish rule that removes any (session) cookies when the URL (including the query string) ends with jpg, jpeg etc. However, since you did not include the Varnish configuration or Varnish logs, you will only receive guesswork. Your test file is of absolutely no value as long as you didn't a) provide the real URL for remote diagnosis and/or b) the VCL for local testing. Without any information on the Varnish configuration, further requests for assistance should be directed to your provider. You need someone with access to the VCL to be able to confirm your issue. The symptoms should be sufficiently descriptive, as long as they reach someone who can do anything about it. We can't. Good luck, -- Bj?rn From chrisbloom7 at gmail.com Wed Mar 16 16:55:56 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Wed, 16 Mar 2011 11:55:56 -0400 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: Thank you, Bjorn, for your response. Our hosting provider tells me that the following routines have been added to the default config. 
sub vcl_recv { # Cache things with these extensions if (req.url ~ "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { unset req.http.cookie; return (lookup); } } sub vcl_fetch { # Cache things with these extensions if (req.url ~ "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { unset req.http.set-cookie; set obj.ttl = 1h; } } Clearly the req.url variable contains the entire request URL, including the querystring. Is there another variable that I should be using instead that would only include the script name? If this is the default behavior, I'm inclined to cry "bug". You can test that other script for yourself by substituting maxisavergroup.com for the domain in the example URLs I provided. PS: We are using Varnish 2.0.6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Wed Mar 16 17:48:47 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Wed, 16 Mar 2011 13:48:47 -0300 Subject: Limited urls Message-ID: Hi guys, I am trying to restrict some access to my Varnish. I want to accept only requests for domain1.com and domain2.com, but deny access to server's IP address. This is my vcl_recv: if (req.http.host ~ ".*domain1.*") { set req.backend = domain1; } elseif (req.http.host ~ ".*domain2.*") { set req.backend = domain2; } else { error 405 "Sorry!"; } Am I doing the right way? Do you have any suggestion? Thank you, Roberto. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn at ruberg.no Wed Mar 16 19:03:52 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Wed, 16 Mar 2011 19:03:52 +0100 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D80FB88.5030907@ruberg.no> On 03/16/2011 04:55 PM, Chris Bloom wrote: > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been > added to the default config. > > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } This is a rather standard config, not designed for corner cases like yours. > Clearly the req.url variable contains the entire request URL, including > the querystring. Is there another variable that I should be using > instead that would only include the script name? If this is the default > behavior, I'm inclined to cry "bug". You can start crying bug after you've convinced the rest of the Internet world, including all browsers, that the query string should not be considered part of the URL. In the meantime, I suggest you let your provider know that your application has special requirements that they will need to accommodate. Your provider can't offer proper service when they don't know your requirements. To provide you with a useful Varnish configuration, your provider needs to know quite a few things about how your application works. This includes any knowledge of cookies and when Varnish should and should not allow them. Since you ask the Varnish community instead of discussing this with your provider, I guess these requirements were never communicated. 
A few tips you and your provider can consider: a) Perhaps a second cookie could be set by the backend application for logged-in users. A configuration could be made so that Varnish would choose to not remove cookies from the file suffixes listed if this cookie was present. b) If the path(s)/filename(s) where the query string may include the mentioned file suffixes are identifiable, your provider could create an exception for those. E.g. if ?foo=bar.jpg only occurs with /some/test/file.php, then the if clause in vcl_recv could take that into consideration. c) Regular expressions in 2.0.6 are case insensitive, so listing both "jpg" and "JPG" in the same expression is unnecessary. - Bj?rn From davidpetzel at gmail.com Wed Mar 16 19:21:21 2011 From: davidpetzel at gmail.com (David Petzel) Date: Wed, 16 Mar 2011 14:21:21 -0400 Subject: Question on Re-Using Backend Probes Message-ID: I'm really new to varnish, so please forgive me if this answered elsewhere, I did some searching and couldn't seem to find it however. I was reviewing the documention and I have a question about back end probes. I'm setting up a directory that will have 10-12 backends. I want each backend to use the same health check, but I don't want to have to re-define the prove 10-12 times. Is it possible to define the probe externally to the backend configuration, and then reference it. Something like the following? probe myProbe1 { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } backend server1 { .host = "server1.example.com"; .probe = myProbe1 } backend server2 { .host = "server2.example.com"; .probe = myProbe1 } All of the examples I've come across have the probe redefined again. for example on http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html#health-checks They show the following example which feels redundant. backend server1 { .host = "server1.example.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } } backend server2 { .host = "server2.example.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 16 19:29:10 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 16 Mar 2011 19:29:10 +0100 Subject: Question on Re-Using Backend Probes In-Reply-To: References: Message-ID: Hi David. On Wed, Mar 16, 2011 at 7:21 PM, David Petzel wrote: > I'm really new to varnish, so please forgive me if this answered elsewhere, > I did some searching and couldn't seem to find it however. > I was reviewing the documention and I have a question about back end probes. > I'm setting up a directory that will have 10-12 backends. I want each > backend to use the same health check, but I don't want to have to re-define > the prove 10-12 times. Is it possible to define the probe externally to the > backend configuration, and then reference it. No. That is not possible. However, you could use a makro language of sorts to pre-process the configuration. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
http://www.varnish-software.com/whitepapers From dhelkowski at sbgnet.com Wed Mar 16 19:51:35 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Wed, 16 Mar 2011 14:51:35 -0400 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D8106B7.5030604@sbgnet.com> The vcl you are showing may be standard, but as you have noticed it will not work properly when query strings end in a file extension. I encountered this same problem after blindly copying from example varnish configurations. Before the check is done, the query parameter needs to be stripped from the url. Example of an alternate way to check the extensions: sub vcl_recv { ... set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { return(lookup); } ... } Doubtless others will say this approach is wrong for some reason or another. I use it in a production environment and it works fine though. Pass it along to your hosting provider and request that they consider changing their config. Note that the above code will cause the end user to receive a 'ext' header with the file extension. You can add a 'remove req.http.ext' after the code if you don't want that to happen... Another thing to consider is that whether it this is a bug or not; it is a common problem with varnish configurations, and as such can be used on most varnish servers to force them to return things differently then they normally would. IE: if some backend script is a huge request and eats up resources, sending it a '?.jpg' could be used to hit it repeatedly and bring about a denial of service. On 3/16/2011 11:55 AM, Chris Bloom wrote: > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been > added to the default config. > > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } > > Clearly the req.url variable contains the entire request URL, > including the querystring. Is there another variable that I should be > using instead that would only include the script name? If this is the > default behavior, I'm inclined to cry "bug". > > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the > example URLs I provided. > > PS: We are using Varnish 2.0.6 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 16 19:59:02 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 16 Mar 2011 19:59:02 +0100 Subject: Session issues when using Varnish In-Reply-To: <4D8106B7.5030604@sbgnet.com> References: <4D8106B7.5030604@sbgnet.com> Message-ID: Hi David, List. I think I'll use this snipplet in the documentation if you don't mind. I need to work in more regsub calls there anyway. Cheers, Per. 
On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski wrote: > The vcl you are showing may be standard, but as you have noticed it will not > work properly when > query strings end in a file extension. I encountered this same problem after > blindly copying from > example varnish configurations. > Before the check is done, the query parameter needs to be stripped from the > url. > Example of an alternate way to check the extensions: > > sub vcl_recv { > ??? ... > ??? set req.http.ext = regsub( req.url, "\?.+$", "" ); > ??? set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > ??? if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > ????? return(lookup); > ??? } > ??? ... > } > > Doubtless others will say this approach is wrong for some reason or another. > I use it in a production > environment and it works fine though. Pass it along to your hosting provider > and request that they > consider changing their config. > > Note that the above code will cause the end user to receive a 'ext' header > with the file extension. > You can add a 'remove req.http.ext' after the code if you don't want that to > happen... > > Another thing to consider is that whether it this is a bug or not; it is a > common problem with varnish > configurations, and as such can be used on most varnish servers to force > them to return things > differently then they normally would. IE: if some backend script is a huge > request and eats up resources, sending > it a '?.jpg' could be used to hit it repeatedly and bring about a denial of > service. > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > Thank you, Bjorn, for your response. > Our hosting provider tells me that the following routines have been added to > the default config. > sub vcl_recv { > ??# Cache things with these extensions > ??if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > ?? ?unset req.http.cookie; > ?? ?return (lookup); > ??} > } > sub vcl_fetch { > ??# Cache things with these extensions > ??if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > ?? ?unset req.http.set-cookie; > ?? ?set obj.ttl = 1h; > ??} > } > Clearly the req.url variable contains the entire request URL, including the > querystring. Is there another variable that I should be using instead that > would only include the script name? If this is the default behavior, I'm > inclined to cry "bug". > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the example URLs I provided. > PS: We are using Varnish 2.0.6 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
http://www.varnish-software.com/whitepapers From straightflush at gmail.com Wed Mar 16 20:30:31 2011 From: straightflush at gmail.com (AD) Date: Wed, 16 Mar 2011 15:30:31 -0400 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: You can remove the header so it doesnt get set set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { * remove req.http.ext; * return(lookup); } On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: > Hi David, List. > > I think I'll use this snipplet in the documentation if you don't mind. > I need to work in more regsub calls there anyway. > > Cheers, > > Per. > > On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski > wrote: > > The vcl you are showing may be standard, but as you have noticed it will > not > > work properly when > > query strings end in a file extension. I encountered this same problem > after > > blindly copying from > > example varnish configurations. > > Before the check is done, the query parameter needs to be stripped from > the > > url. > > Example of an alternate way to check the extensions: > > > > sub vcl_recv { > > ... > > set req.http.ext = regsub( req.url, "\?.+$", "" ); > > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > > if( req.http.ext ~ > > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > > return(lookup); > > } > > ... > > } > > > > Doubtless others will say this approach is wrong for some reason or > another. > > I use it in a production > > environment and it works fine though. Pass it along to your hosting > provider > > and request that they > > consider changing their config. > > > > Note that the above code will cause the end user to receive a 'ext' > header > > with the file extension. > > You can add a 'remove req.http.ext' after the code if you don't want that > to > > happen... > > > > Another thing to consider is that whether it this is a bug or not; it is > a > > common problem with varnish > > configurations, and as such can be used on most varnish servers to force > > them to return things > > differently then they normally would. IE: if some backend script is a > huge > > request and eats up resources, sending > > it a '?.jpg' could be used to hit it repeatedly and bring about a denial > of > > service. > > > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > > > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been added > to > > the default config. > > sub vcl_recv { > > # Cache things with these extensions > > if (req.url ~ > > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > > unset req.http.cookie; > > return (lookup); > > } > > } > > sub vcl_fetch { > > # Cache things with these extensions > > if (req.url ~ > > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > > unset req.http.set-cookie; > > set obj.ttl = 1h; > > } > > } > > Clearly the req.url variable contains the entire request URL, including > the > > querystring. Is there another variable that I should be using instead > that > > would only include the script name? If this is the default behavior, I'm > > inclined to cry "bug". > > You can test that other script for yourself by substituting > > maxisavergroup.com for the domain in the example URLs I provided. 
> > PS: We are using Varnish 2.0.6 > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Wed Mar 16 23:58:31 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Wed, 16 Mar 2011 15:58:31 -0700 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: Or not set a header at all: if ( req.url ~ "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { unset req.http.cookie return(lookup); } Didn't test the regex with Varnish's regex handling. -- kb On Wed, Mar 16, 2011 at 12:30, AD wrote: > You can remove the header so it doesnt get set > > set req.http.ext = regsub( req.url, "\?.+$", "" ); > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > * remove req.http.ext; * > return(lookup); > } > > > > On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: > >> Hi David, List. >> >> I think I'll use this snipplet in the documentation if you don't mind. >> I need to work in more regsub calls there anyway. >> >> Cheers, >> >> Per. >> >> On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski >> wrote: >> > The vcl you are showing may be standard, but as you have noticed it will >> not >> > work properly when >> > query strings end in a file extension. I encountered this same problem >> after >> > blindly copying from >> > example varnish configurations. >> > Before the check is done, the query parameter needs to be stripped from >> the >> > url. >> > Example of an alternate way to check the extensions: >> > >> > sub vcl_recv { >> > ... >> > set req.http.ext = regsub( req.url, "\?.+$", "" ); >> > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); >> > if( req.http.ext ~ >> > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >> > return(lookup); >> > } >> > ... >> > } >> > >> > Doubtless others will say this approach is wrong for some reason or >> another. >> > I use it in a production >> > environment and it works fine though. Pass it along to your hosting >> provider >> > and request that they >> > consider changing their config. >> > >> > Note that the above code will cause the end user to receive a 'ext' >> header >> > with the file extension. >> > You can add a 'remove req.http.ext' after the code if you don't want >> that to >> > happen... >> > >> > Another thing to consider is that whether it this is a bug or not; it is >> a >> > common problem with varnish >> > configurations, and as such can be used on most varnish servers to force >> > them to return things >> > differently then they normally would. 
IE: if some backend script is a >> huge >> > request and eats up resources, sending >> > it a '?.jpg' could be used to hit it repeatedly and bring about a denial >> of >> > service. >> > >> > On 3/16/2011 11:55 AM, Chris Bloom wrote: >> > >> > Thank you, Bjorn, for your response. >> > Our hosting provider tells me that the following routines have been >> added to >> > the default config. >> > sub vcl_recv { >> > # Cache things with these extensions >> > if (req.url ~ >> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >> > unset req.http.cookie; >> > return (lookup); >> > } >> > } >> > sub vcl_fetch { >> > # Cache things with these extensions >> > if (req.url ~ >> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >> > unset req.http.set-cookie; >> > set obj.ttl = 1h; >> > } >> > } >> > Clearly the req.url variable contains the entire request URL, including >> the >> > querystring. Is there another variable that I should be using instead >> that >> > would only include the script name? If this is the default behavior, I'm >> > inclined to cry "bug". >> > You can test that other script for yourself by substituting >> > maxisavergroup.com for the domain in the example URLs I provided. >> > PS: We are using Varnish 2.0.6 >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> >> >> >> -- >> Per Buer, Varnish Software >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer >> Varnish makes websites fly! >> Want to learn more about Varnish? >> http://www.varnish-software.com/whitepapers >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoiz.shine at gmail.com Thu Mar 17 03:34:55 2011 From: amoiz.shine at gmail.com (Sharl.Jimh.Tsin) Date: Thu, 17 Mar 2011 10:34:55 +0800 Subject: Limited urls In-Reply-To: References: Message-ID: yes,it is right. Best regards, Sharl.Jimh.Tsin (From China **Obviously Taiwan INCLUDED**) 2011/3/17 Roberto O. Fern?ndez Crisial : > Hi guys, > I am trying to?restrict?some access to my Varnish. I?want?to accept only > requests for?domain1.com and?domain2.com, but deny access to server's IP > address. This is my vcl_recv: > if (req.http.host ~ ".*domain1.*") > { > > set req.backend = domain1; > > } > elseif (req.http.host ~ ".*domain2.*") > { > > set req.backend = domain2; > > } > else > { > > error 405 "Sorry!"; > > } > Am I doing the right way? Do you have any suggestion? > Thank you, > Roberto. 
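For the record, a slightly stricter variant of that vcl_recv is sketched below. Matching with ".*domain1.*" also routes hosts such as "notdomain1.example.com", so anchoring the expression and stripping an optional port is usually safer; requests aimed directly at the server's IP address then fall through to the error branch. The backend names domain1 and domain2 come from the post above, while the anchored regexes and the port-stripping regsub are illustrative assumptions and have not been tested against a live setup.

sub vcl_recv {
    # Drop an optional ":port" suffix so "domain1.com:80" still matches.
    set req.http.host = regsub(req.http.host, ":[0-9]+$", "");

    # Route only the two known domains (and their subdomains).
    if (req.http.host ~ "(^|\.)domain1\.com$") {
        set req.backend = domain1;
    } elseif (req.http.host ~ "(^|\.)domain2\.com$") {
        set req.backend = domain2;
    } else {
        # Anything else, including requests for the bare IP address,
        # is refused outright.
        error 405 "Sorry!";
    }
}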
> _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From dhelkowski at sbgnet.com Thu Mar 17 03:40:16 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Wed, 16 Mar 2011 22:40:16 -0400 (EDT) Subject: Session issues when using Varnish In-Reply-To: <762709593.897379.1300329124628.JavaMail.root@mail-01.sbgnet.com> Message-ID: <1185929555.897407.1300329616885.JavaMail.root@mail-01.sbgnet.com> I agree that this is a better expression to use if you are only testing one set of extensions and don't intend to do anything else with the extension itself. Using the same method: ( if you want to capture the extension for some reason ) set req.http.ext = regsub( req.url, "^[^\?]*?\.([a-zA-Z]+)($|\?)", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { return(lookup); } I also have not tested this; but it should work assuming the other example works. ----- Original Message ----- From: "Ken Brownfield" To: varnish-misc at varnish-cache.org Sent: Wednesday, March 16, 2011 6:58:31 PM Subject: Re: Session issues when using Varnish Or not set a header at all: if ( req.url ~ "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { unset req.http.cookie return(lookup); } Didn't test the regex with Varnish's regex handling. -- kb On Wed, Mar 16, 2011 at 12:30, AD < straightflush at gmail.com > wrote: You can remove the header so it doesnt get set set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { remove req.http.ext; return(lookup); } On Wed, Mar 16, 2011 at 2:59 PM, Per Buer < perbu at varnish-software.com > wrote: Hi David, List. I think I'll use this snipplet in the documentation if you don't mind. I need to work in more regsub calls there anyway. Cheers, Per. On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski < dhelkowski at sbgnet.com > wrote: > The vcl you are showing may be standard, but as you have noticed it will not > work properly when > query strings end in a file extension. I encountered this same problem after > blindly copying from > example varnish configurations. > Before the check is done, the query parameter needs to be stripped from the > url. > Example of an alternate way to check the extensions: > > sub vcl_recv { > ... > set req.http.ext = regsub( req.url, "\?.+$", "" ); > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > return(lookup); > } > ... > } > > Doubtless others will say this approach is wrong for some reason or another. > I use it in a production > environment and it works fine though. Pass it along to your hosting provider > and request that they > consider changing their config. > > Note that the above code will cause the end user to receive a 'ext' header > with the file extension. > You can add a 'remove req.http.ext' after the code if you don't want that to > happen... > > Another thing to consider is that whether it this is a bug or not; it is a > common problem with varnish > configurations, and as such can be used on most varnish servers to force > them to return things > differently then they normally would. 
IE: if some backend script is a huge > request and eats up resources, sending > it a '?.jpg' could be used to hit it repeatedly and bring about a denial of > service. > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > Thank you, Bjorn, for your response. > Our hosting provider tells me that the following routines have been added to > the default config. > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } > Clearly the req.url variable contains the entire request URL, including the > querystring. Is there another variable that I should be using instead that > would only include the script name? If this is the default behavior, I'm > inclined to cry "bug". > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the example URLs I provided. > PS: We are using Varnish 2.0.6 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From weipeng.pengw at alibaba-inc.com Thu Mar 17 04:01:01 2011 From: weipeng.pengw at alibaba-inc.com (=?GB2312?B?xe3OsA==?=) Date: Thu, 17 Mar 2011 11:01:01 +0800 Subject: ESI problem in Red Hat Enterprise Linux Message-ID: hi all: i install varnish using the source code "varnish-2.1.4.tar.gz" in ubuntu10.4 and "Red Hat Enterprise Linux Server release 5.3 (Tikanga)" when i use ESI in ubuntu, it's ok, both the main page and the esi included page can be showed but the same configure file and the same pages in redhat, only the main page can be showed the configure file is as below: backend default { .host = "127.0.0.1"; .port = "80"; } backend javaeye { .host = "www.javaeye.com"; .port = "80"; .connect_timeout = 1s; .first_byte_timeout = 5s; .between_bytes_timeout = 2s; } acl purge { "localhost"; "127.0.0.1"; "192.168.1.0"/24; } sub vcl_recv { if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/forums/") { set req.backend = javaeye; set req.http.Host="www.javaeye.com"; return (pass); } else { set req.backend = default; } if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (req.request != 
"GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization || req.http.Cookie) { return (pass); } return (lookup); } sub vcl_pass { return (pass); } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } if (!obj.cacheable) { return (pass); } return (deliver); } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache."; } return (fetch); } sub vcl_fetch { if (req.url ~ "/[a-z0-9]+.html$") { esi; /* Do ESI processing */ } remove beresp.http.Last-Modified; remove beresp.http.Etag; #set beresp.http.Cache-Control="no-cache"; if (!beresp.cacheable) { return (pass); } if (beresp.http.Set-Cookie) { return (pass); } if (req.url ~ "^/[a-z]+/") { /* We only deal with GET and HEAD by default */ return (pass); } return (deliver); } sub vcl_deliver { return (deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" "} obj.status " " obj.response {"

Error "} obj.status " " obj.response {"
"} obj.response {"
Guru Meditation:
XID: "} req.xid {"
Varnish cache server

"}; return (deliver); } the main page url: http://10.20.156.7:8000/haha.html the main page content: 123haha111 please help me! thanks ! Regards! pwlazy ________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you. ???(??????)?????????????????????????????????????????????????????????????????????? -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisbloom7 at gmail.com Thu Mar 17 16:59:59 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Thu, 17 Mar 2011 11:59:59 -0400 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: FYI - I forwarded Ken's suggested solution to our Rackspace tech who updated our config. This appears to have resolved our issue. Thanks! Chris Bloom Internet Application Developer On Wed, Mar 16, 2011 at 6:58 PM, Ken Brownfield wrote: > Or not set a header at all: > > if ( req.url ~ > "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { > unset req.http.cookie > return(lookup); > } > > Didn't test the regex with Varnish's regex handling. > -- > kb > > > > On Wed, Mar 16, 2011 at 12:30, AD wrote: > >> You can remove the header so it doesnt get set >> >> set req.http.ext = regsub( req.url, "\?.+$", "" ); >> set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); >> if( req.http.ext ~ >> "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >> * remove req.http.ext; * >> return(lookup); >> } >> >> >> >> On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: >> >>> Hi David, List. >>> >>> I think I'll use this snipplet in the documentation if you don't mind. >>> I need to work in more regsub calls there anyway. >>> >>> Cheers, >>> >>> Per. >>> >>> On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski >>> wrote: >>> > The vcl you are showing may be standard, but as you have noticed it >>> will not >>> > work properly when >>> > query strings end in a file extension. I encountered this same problem >>> after >>> > blindly copying from >>> > example varnish configurations. >>> > Before the check is done, the query parameter needs to be stripped from >>> the >>> > url. >>> > Example of an alternate way to check the extensions: >>> > >>> > sub vcl_recv { >>> > ... >>> > set req.http.ext = regsub( req.url, "\?.+$", "" ); >>> > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" >>> ); >>> > if( req.http.ext ~ >>> > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >>> > return(lookup); >>> > } >>> > ... >>> > } >>> > >>> > Doubtless others will say this approach is wrong for some reason or >>> another. >>> > I use it in a production >>> > environment and it works fine though. Pass it along to your hosting >>> provider >>> > and request that they >>> > consider changing their config. >>> > >>> > Note that the above code will cause the end user to receive a 'ext' >>> header >>> > with the file extension. >>> > You can add a 'remove req.http.ext' after the code if you don't want >>> that to >>> > happen... >>> > >>> > Another thing to consider is that whether it this is a bug or not; it >>> is a >>> > common problem with varnish >>> > configurations, and as such can be used on most varnish servers to >>> force >>> > them to return things >>> > differently then they normally would. 
IE: if some backend script is a >>> huge >>> > request and eats up resources, sending >>> > it a '?.jpg' could be used to hit it repeatedly and bring about a >>> denial of >>> > service. >>> > >>> > On 3/16/2011 11:55 AM, Chris Bloom wrote: >>> > >>> > Thank you, Bjorn, for your response. >>> > Our hosting provider tells me that the following routines have been >>> added to >>> > the default config. >>> > sub vcl_recv { >>> > # Cache things with these extensions >>> > if (req.url ~ >>> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >>> > unset req.http.cookie; >>> > return (lookup); >>> > } >>> > } >>> > sub vcl_fetch { >>> > # Cache things with these extensions >>> > if (req.url ~ >>> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >>> > unset req.http.set-cookie; >>> > set obj.ttl = 1h; >>> > } >>> > } >>> > Clearly the req.url variable contains the entire request URL, including >>> the >>> > querystring. Is there another variable that I should be using instead >>> that >>> > would only include the script name? If this is the default behavior, >>> I'm >>> > inclined to cry "bug". >>> > You can test that other script for yourself by substituting >>> > maxisavergroup.com for the domain in the example URLs I provided. >>> > PS: We are using Varnish 2.0.6 >>> > >>> > _______________________________________________ >>> > varnish-misc mailing list >>> > varnish-misc at varnish-cache.org >>> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> > >>> > _______________________________________________ >>> > varnish-misc mailing list >>> > varnish-misc at varnish-cache.org >>> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> > >>> >>> >>> >>> -- >>> Per Buer, Varnish Software >>> Phone: <%2B47%2021%2098%2092%2061>+47 21 98 92 61 / Mobile: >>> <%2B47%20958%2039%20117>+47 958 39 117 / Skype: per.buer >>> Varnish makes websites fly! >>> Want to learn more about Varnish? >>> http://www.varnish-software.com/whitepapers >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.begumisa at gmail.com Fri Mar 18 02:24:19 2011 From: j.begumisa at gmail.com (Joseph Begumisa) Date: Thu, 17 Mar 2011 18:24:19 -0700 Subject: Request body of POST Message-ID: Is there anyway I can see the request body of a POST in the varnish logs generated from running the varnishlog command? Thanks. Best Regards, Joseph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tfheen at varnish-software.com Fri Mar 18 09:22:46 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 18 Mar 2011 09:22:46 +0100 Subject: Request body of POST In-Reply-To: (Joseph Begumisa's message of "Thu, 17 Mar 2011 18:24:19 -0700") References: Message-ID: <87y64ckfc9.fsf@qurzaw.varnish-software.com> ]] Joseph Begumisa Hi, | Is there anyway I can see the request body of a POST in the varnish logs | generated from running the varnishlog command? Thanks. No. Use tcpdump or wireshark/tshark to get at that. regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From j.begumisa at gmail.com Fri Mar 18 18:16:54 2011 From: j.begumisa at gmail.com (Joseph Begumisa) Date: Fri, 18 Mar 2011 10:16:54 -0700 Subject: Request body of POST In-Reply-To: <87y64ckfc9.fsf@qurzaw.varnish-software.com> References: <87y64ckfc9.fsf@qurzaw.varnish-software.com> Message-ID: Thanks. Best Regards, Joseph On Fri, Mar 18, 2011 at 1:22 AM, Tollef Fog Heen < tfheen at varnish-software.com> wrote: > ]] Joseph Begumisa > > Hi, > > | Is there anyway I can see the request body of a POST in the varnish logs > | generated from running the varnishlog command? Thanks. > > No. > > Use tcpdump or wireshark/tshark to get at that. > > regards, > -- > Tollef Fog Heen > Varnish Software > t: +47 21 98 92 64 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun Mar 20 22:12:32 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:12:32 +0000 Subject: Are varnish subroutines reentrant? Message-ID: Would I be correct in assuming that any subroutines not using inline C are reentrant? I'm talking about non-defaulted, site-specific subroutines here, not vcl_* ones, as I presume the question is possibly meaningless for the vcl_* set. Many thanks, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From phk at phk.freebsd.dk Sun Mar 20 22:26:58 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 20 Mar 2011 21:26:58 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: Your message of "Sun, 20 Mar 2011 21:12:32 GMT." Message-ID: <98274.1300656418@critter.freebsd.dk> In message , Jona than Matthews writes: >Would I be correct in assuming that any subroutines not using inline C >are reentrant? >I'm talking about non-defaulted, site-specific subroutines here, not >vcl_* ones, as I presume the question is possibly meaningless for the >vcl_* set. It would probably be a lot more easy to answer, if you told me the names of the subroutines you are interested in. In general, reentrancy is highly variable in Varnish. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From contact at jpluscplusm.com Sun Mar 20 22:39:22 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:39:22 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: <98274.1300656418@critter.freebsd.dk> References: <98274.1300656418@critter.freebsd.dk> Message-ID: On 20 March 2011 21:26, Poul-Henning Kamp wrote: > In message , Jona > than Matthews writes: >>Would I be correct in assuming that any subroutines not using inline C >>are reentrant? >>I'm talking about non-defaulted, site-specific subroutines here, not >>vcl_* ones, as I presume the question is possibly meaningless for the >>vcl_* set. 
> > It would probably be a lot more easy to answer, if you told me the > names of the subroutines you are interested in. They're ones that I'm defining in my VCL. They're site-specific helper functions that don't exist in the default VCL. I'm not asking for an analysis of the reentrant nature of a specific algorithm or block of code, just to know if there's anything underlying the VCL at any specific points in the route through the standard subroutines that would make being reentrant more complex to deal with than solely making sure the algorithm is reentrant. If that makes sense :-) Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From phk at phk.freebsd.dk Sun Mar 20 22:43:49 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 20 Mar 2011 21:43:49 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: Your message of "Sun, 20 Mar 2011 21:39:22 GMT." Message-ID: <37820.1300657429@critter.freebsd.dk> In message , Jona than Matthews writes: >just to know if there's anything >underlying the VCL at any specific points in the route through the >standard subroutines that would make being reentrant more complex to >deal with than solely making sure the algorithm is reentrant. If that >makes sense :-) As long as you take care of the usual stuff, (static/global variables etc) there shouldn't be any issues. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From contact at jpluscplusm.com Sun Mar 20 22:58:14 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:58:14 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: <37820.1300657429@critter.freebsd.dk> References: <37820.1300657429@critter.freebsd.dk> Message-ID: On 20 March 2011 21:43, Poul-Henning Kamp wrote: > In message , Jona > than Matthews writes: > >>just to know if there's anything >>underlying the VCL at any specific points in the route through the >>standard subroutines that would make being reentrant more complex to >>deal with than solely making sure the algorithm is reentrant. ?If that >>makes sense :-) > > As long as you take care of the usual stuff, (static/global variables > etc) there shouldn't be any issues. Many thanks. Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From krjeschke at omniti.com Fri Mar 18 20:18:10 2011 From: krjeschke at omniti.com (Katherine Jeschke) Date: Fri, 18 Mar 2011 15:18:10 -0400 Subject: Surge 2011 Conference CFP Message-ID: We are excited to announce Surge 2011, the Scalability and Performance Conference, to be held in Baltimore on Sept 28-30, 2011. The event focuses on case studies that demonstrate successes (and failures) in Web applications and Internet architectures. This year, we're adding Hack Day on September 28th. The inaugural, 2010 conference (http://omniti.com/surge/2010) was a smashing success and we are currently accepting submissions for papers through April 3rd. You can find more information about topics online: http://omniti.com/surge/2011 2010 attendees compared Surge to the early days of Velocity, and our speakers received 3.5-4 out of 4 stars for quality of presentation and quality of content! Nearly 90% of first-year attendees are planning to come again in 2011. For more information about the CFP or sponsorship of the event, please contact us at surge (AT) omniti (DOT) com. 
-- Katherine Jeschke Marketing Director OmniTI Computer Consulting, Inc. 7070 Samuel Morse Drive, Ste.150 Columbia, MD 21046 O: 410/872-4910, 222 C: 443/643-6140 omniti.com circonus.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Mar 21 16:08:45 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 21 Mar 2011 15:08:45 +0000 Subject: Warming the cache from an existing squid proxy instance Message-ID: Hi all - I've got some long-running squid instances, mainly used for caching medium-sized binaries, which I'd like to replace with some varnish instances. The binaries are quite heavy to regenerate on the distant origin servers and there's a large number of them. Hence, I'd like to use the squid cache as a target to warm a (new, nearby) varnish instance instead of just pointing the varnish instance at the remote origin servers. The squid instances are running in proxy mode, and require (I *believe*) an HTTP CONNECT. I've looked around for people trying the same thing, but haven't come across any success stories. I'm perfectly prepared to be told that I simply have to reconfigure the squid instances in mixed proxy/origin-server mode, and that there's no way around it, but I thought I'd ask the list for guidance first ... Any thoughts? Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From scott at dor.ky Mon Mar 21 22:10:09 2011 From: scott at dor.ky (Scott Wilcox) Date: Mon, 21 Mar 2011 21:10:09 +0000 Subject: Using Varnish with SSL Message-ID: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Hello folks, I've recently been looking at introducing Varnish into my current fronted system. From what I've seen and in my own testing, I've been very impressed with the performance gains. One question I do have, is about using SSL with Varnish. I'll be using Varnish to push over to an Apache server which runs on :80 and :443 at present, serving also identical content (if needed for simplicity, these can be merged). What I'd like to know is the best way to configure this (and if its possible actually). I very much need to keep SSL access open, I realise that I could just run apache 'native' on :443, but I'd be a lot happier if I can push it through Varnish. Thoughts, comments and suggestions all most welcome! Scott. From perbu at varnish-software.com Mon Mar 21 22:37:42 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 21 Mar 2011 22:37:42 +0100 Subject: Using Varnish with SSL In-Reply-To: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Message-ID: Hi Scott. On Mon, Mar 21, 2011 at 10:10 PM, Scott Wilcox wrote: > > What I'd like to know is the best way to configure this (and if its possible actually). I very much need to keep SSL access open, I realise that I could just run apache 'native' on :443, but I'd be a lot happier if I can push it through Varnish. www.varnish-cache.org and www.varnish-software.com are running a hidden apache (w/PHP) behind Varnish. On port 443 there is a minimalistic nginx which does the SSL stuff and connects to Varnish. It works well. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
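A minimal sketch of the nginx-on-443 arrangement Per describes, assuming nginx proxies to Varnish on 127.0.0.1:80; the certificate paths are placeholders and none of these values come from the original post:

    server {
        listen              443;
        ssl                 on;
        ssl_certificate     /etc/nginx/ssl/site.crt;
        ssl_certificate_key /etc/nginx/ssl/site.key;

        location / {
            # hand the decrypted request to Varnish on loopback
            proxy_pass        http://127.0.0.1:80;
            proxy_set_header  Host              $host;
            proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto https;
        }
    }

Varnish then talks plain HTTP to the hidden Apache on another loopback port, so the backend only ever sees unencrypted requests.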
http://www.varnish-software.com/whitepapers From straightflush at gmail.com Tue Mar 22 02:49:18 2011 From: straightflush at gmail.com (AD) Date: Mon, 21 Mar 2011 21:49:18 -0400 Subject: obj.ttl not available in vcl_deliver Message-ID: Hello, Per the docs it says that all the obj.* values should be available in vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get the following error: Variable 'obj.ttl' not accessible in method 'vcl_deliver'. This is on Ubuntu, Varnish 2.1.5. Any ideas ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Tue Mar 22 03:30:24 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Mon, 21 Mar 2011 19:30:24 -0700 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: Per lots of posts on this list, obj is now baresp in newer Varnish versions. It sounds like the documentation for this change hasn't been fully propagated. -- kb On Mon, Mar 21, 2011 at 18:49, AD wrote: > Hello, > > Per the docs it says that all the obj.* values should be available in > vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get > the following error: > > Variable 'obj.ttl' not accessible in method 'vcl_deliver'. > > This is on Ubuntu, Varnish 2.1.5. Any ideas ? > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Tue Mar 22 03:39:50 2011 From: straightflush at gmail.com (AD) Date: Mon, 21 Mar 2011 22:39:50 -0400 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: hmm, it seems beresp.* is available in vcl_fetch, but not vcl_deliver. I need obj.ttl in vcl_deliver (to get the TTL as it is in the cache, not from the backend). On Mon, Mar 21, 2011 at 10:30 PM, Ken Brownfield wrote: > Per lots of posts on this list, obj is now baresp in newer Varnish > versions. It sounds like the documentation for this change hasn't been > fully propagated. > -- > kb > > > > On Mon, Mar 21, 2011 at 18:49, AD wrote: > >> Hello, >> >> Per the docs it says that all the obj.* values should be available in >> vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get >> the following error: >> >> Variable 'obj.ttl' not accessible in method 'vcl_deliver'. >> >> This is on Ubuntu, Varnish 2.1.5. Any ideas ? >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattias at nucleus.be Tue Mar 22 10:01:45 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Tue, 22 Mar 2011 10:01:45 +0100 Subject: Using Varnish with SSL In-Reply-To: References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Message-ID: <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> Hi Per, > > What I'd like to know is the best way to configure this (and if its possible > actually). 
I very much need to keep SSL access open, I realise that I could just > run apache 'native' on :443, but I'd be a lot happier if I can push it through > Varnish. > > www.varnish-cache.org and www.varnish-software.com are running a > hidden apache (w/PHP) behind Varnish. On port 443 there is a > minimalistic nginx which does the SSL stuff and connects to Varnish. > It works well. So you're routing all SSL (port 443) via Nginx- > to Varnish -> to Apache? Meaning your nginx is covering the SSL certificates, and your backend is only getting "normal" unencrypted hits? How does that translate to performance? Are you losing a lot by passing it all via nginx first? It's an interesting discussion, I'd love to hear more on the "best practice" implementation of this to get the most performance gain. Regards, Mattias From perbu at varnish-software.com Tue Mar 22 10:25:33 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 22 Mar 2011 10:25:33 +0100 Subject: Using Varnish with SSL In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> Message-ID: On Tue, Mar 22, 2011 at 10:01 AM, Mattias Geniar wrote: >> www.varnish-cache.org and www.varnish-software.com are running a >> hidden apache (w/PHP) behind Varnish. On port 443 there is a >> minimalistic nginx which does the SSL stuff and connects to Varnish. >> It works well. > > So you're routing all SSL (port 443) via Nginx- > to Varnish -> to > Apache? Yes. Varnish on port 80 with a Apache backend at some other port on loopback. > Meaning your nginx is covering the SSL certificates, and your > backend is only getting "normal" unencrypted hits? Yes. > How does that translate to performance? Are you losing a lot by passing > it all via nginx first? Not really. There is some HTTP header processing that is unnecessary and that could have been saved if SSL was native in Varnish but all in all, with Varnish you usually have a lot of CPU to spare. I remember a couple of years back we where running the same stack and thousands of hits per second without any issues. > It's an interesting discussion, I'd love to hear more on the "best > practice" implementation of this to get the most performance gain. SSL used to be very expensive. It isn't anymore. There have been good advances in both hardware and software so SSL rather cheap. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From s.welschhoff at lvm.de Tue Mar 22 10:42:54 2011 From: s.welschhoff at lvm.de (Stefan Welschhoff) Date: Tue, 22 Mar 2011 10:42:54 +0100 Subject: Two Different Backends Message-ID: Hello, I want to configure varnish with two different backends. But with my configuration varnish can't handle with both. sub vcl_recv { if (req.url ~"^/partner/") { set req.backend = directory1; set req.http.host = "partnerservicesq00.xxx.de"; } if (req.url ~"^/schaden/") { set req.backend = directory2; set req.http.host = "servicesq00.xxx.de"; } else { set req.backend = default; } } When I take only the first server and comment the second out it works. But I want to have both. Kind regards Stefan Welschhoff -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
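One way Stefan's vcl_recv above can be restructured so that the default backend is only chosen when neither prefix matches. A sketch reusing the director and host names from his post, applying the elsif fix Michael explains in the reply below:

    sub vcl_recv {
        if (req.url ~ "^/partner/") {
            set req.backend   = directory1;
            set req.http.host = "partnerservicesq00.xxx.de";
        } elsif (req.url ~ "^/schaden/") {
            set req.backend   = directory2;
            set req.http.host = "servicesq00.xxx.de";
        } else {
            set req.backend   = default;
        }
    }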
Name: LVM_Unternehmenssignatur.pdf Type: application/pdf Size: 20769 bytes Desc: not available URL: From varnish at mm.quex.org Tue Mar 22 10:54:17 2011 From: varnish at mm.quex.org (Michael Alger) Date: Tue, 22 Mar 2011 17:54:17 +0800 Subject: Two Different Backends In-Reply-To: References: Message-ID: <20110322095417.GA26096@grum.quex.org> On Tue, Mar 22, 2011 at 10:42:54AM +0100, Stefan Welschhoff wrote: > > I want to configure varnish with two different backends. But with > my configuration varnish can't handle with both. There is a logic error here: > if (req.url ~"^/partner/") > { > set req.backend = directory1; > set req.http.host = "partnerservicesq00.xxx.de"; > } The above if-clause will be run, and then, regardless of the outcome, the next if-else-clause will be run: > if (req.url ~"^/schaden/") > { > set req.backend = directory2; > set req.http.host = "servicesq00.xxx.de"; > } > else > { > set req.backend = default; > } This means that if the URL matched /partner/ the backend will get set to back to default, because it falls through to the "else". I think you want your second if for /schaden/ to be an elsif. if (req.url ~ "^/partner/") { } elsif (req.url ~ "^/schaden/") { } else { } If that's not the problem you're having, please provide some more information, i.e. backend configuration and error messages if any, or the expected and actual result. From cdgraff at gmail.com Wed Mar 23 04:04:51 2011 From: cdgraff at gmail.com (Alejandro) Date: Wed, 23 Mar 2011 00:04:51 -0300 Subject: VarnishLog: Broken pipe (Debug) In-Reply-To: References: Message-ID: Hi guys, Some one can help with this? I have the same issue in the logs. Regards, Alejandro El 15 de marzo de 2011 10:51, Roberto O. Fern?ndez Crisial < roberto.fernandezcrisial at gmail.com> escribi?: > Hi guys, > > I need some help and I think you can help me. A few days ago I was realized > that Varnish is showing some error messages when debug mode is enable on > varnishlog: > > 4741 Debug c "Write error, retval = -1, len = 237299, errno = > Broken pipe" > 2959 Debug c "Write error, retval = -1, len = 237299, errno = > Broken pipe" > 2591 Debug c "Write error, retval = -1, len = 168289, errno = > Broken pipe" > 3517 Debug c "Write error, retval = -1, len = 114421, errno = > Broken pipe" > > I want to know what are those error messages and why are they generated. > Any suggestion? > > Thank you! > Roberto O. Fern?ndez Crisial > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Wed Mar 23 04:44:18 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:44:18 -0400 Subject: Child Process Killed Message-ID: The child process got killed abruptly. I am attaching a bunch of munin graphs, relevant syslog, the current varnishstat -1 output. I am running Varnish 2.1.5 on a 64 bit machine with the following command: sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 My VCL is fairly simple, and I think has nothing to do with the error. Any help would be appreciated. -T -------------- next part -------------- An HTML attachment was scrubbed... 
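A side note on the command line above: -s malloc,5G caps the storage used for cached object bodies, not the size of the whole varnishd process; headers, thread stacks and per-request workspace come on top of that. Two quick checks, sketched with standard tools (the sma_* counter names match the varnishstat output pasted further down this thread):

    # how much of the malloc storage is actually allocated
    varnishstat -1 | grep '^sma_'

    # resident/virtual size of the varnishd processes themselves
    ps -C varnishd -o pid,rss,vsz,args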
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish.munin.tz Type: application/octet-stream Size: 88817 bytes Desc: not available URL: From nadahalli at gmail.com Wed Mar 23 04:46:05 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:46:05 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: Resending the other attachments (syslog and varnishstat) -T On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: > The child process got killed abruptly. > > I am attaching a bunch of munin graphs, relevant syslog, the current > varnishstat -1 output. > > I am running Varnish 2.1.5 on a 64 bit machine with the following command: > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 > > My VCL is fairly simple, and I think has nothing to do with the error. > > Any help would be appreciated. > > -T > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- client_conn 5409469 482.69 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 5409469 482.69 Client requests received cache_hit 5358032 478.10 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 51434 4.59 Cache misses backend_conn 51434 4.59 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 51433 4.59 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 200 . N struct sess_mem n_sess 100 . N struct sess n_object 45560 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 45669 . N struct objectcore n_objecthead 45673 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 200 . N worker threads n_wrk_create 200 0.02 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 28 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . N backends n_expired 5763 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 298470 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5409362 482.68 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 5409469 482.69 Total Sessions s_req 5409469 482.69 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 51433 4.59 Total fetch s_hdrbytes 1189049759 106098.85 Total header bytes s_bodybytes 5149727833 459509.93 Total body bytes sess_closed 5409469 482.69 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 226158115 20180.08 SHM records shm_writes 21752857 1941.01 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 27172 2.42 SHM MTX contention shm_cycles 97 0.01 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 102756 9.17 SMA allocator requests sma_nobj 91120 . SMA outstanding allocations sma_nbytes 72897093 . SMA outstanding bytes sma_balloc 82131133 . SMA bytes allocated sma_bfree 9234040 . SMA bytes free sms_nreq 1 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 418 . SMS bytes allocated sms_bfree 418 . SMS bytes freed backend_req 51434 4.59 Backend requests made n_vcl 9 0.00 N vcl total n_vcl_avail 9 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 155 . N total active purges n_purge_add 155 0.01 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 43087 3.84 N objects tested n_purge_re_test 561069 50.06 N regexps tested against n_purge_dups 140 0.01 N duplicate purges removed hcb_nolock 5409434 482.68 HCB Lookups without lock hcb_lock 45671 4.08 HCB Lookups with lock hcb_insert 45671 4.08 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 11207 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) -------------- next part -------------- Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858414] python invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858420] python cpuset=/ mems_allowed=0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858424] Pid: 5766, comm: python Not tainted 2.6.32-305-ec2 #9-Ubuntu Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858426] Call Trace: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858436] [] ? 
cpuset_print_task_mems_allowed+0x8c/0xc0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858442] [] oom_kill_process+0xe3/0x210 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858445] [] __out_of_memory+0x50/0xb0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858448] [] out_of_memory+0x5f/0xc0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858451] [] __alloc_pages_slowpath+0x4c1/0x560 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858455] [] __alloc_pages_nodemask+0x171/0x180 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858458] [] __do_page_cache_readahead+0xd7/0x220 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858461] [] ra_submit+0x1c/0x20 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858464] [] filemap_fault+0x3fe/0x450 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858468] [] __do_fault+0x50/0x680 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858470] [] handle_mm_fault+0x260/0x4f0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858476] [] do_page_fault+0x147/0x390 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858479] [] page_fault+0x28/0x30 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858481] Mem-Info: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858483] DMA per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858484] CPU 0: hi: 0, btch: 1 usd: 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858486] CPU 1: hi: 0, btch: 1 usd: 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858487] DMA32 per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858489] CPU 0: hi: 155, btch: 38 usd: 146 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858491] CPU 1: hi: 155, btch: 38 usd: 178 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858492] Normal per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858493] CPU 0: hi: 155, btch: 38 usd: 136 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858495] CPU 1: hi: 155, btch: 38 usd: 43 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858499] active_anon:1561108 inactive_anon:312311 isolated_anon:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858500] active_file:133 inactive_file:251 isolated_file:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858501] unevictable:0 dirty:9 writeback:0 unstable:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858501] free:10533 slab_reclaimable:711 slab_unreclaimable:7610 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858503] mapped:104 shmem:46 pagetables:0 bounce:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858508] DMA free:16384kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:16160kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858513] lowmem_reserve[]: 0 4024 7559 7559 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858519] DMA32 free:19904kB min:5916kB low:7392kB high:8872kB active_anon:3246376kB inactive_anon:649464kB active_file:0kB inactive_file:448kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4120800kB mlocked:0kB dirty:4kB writeback:0kB mapped:164kB shmem:16kB slab_reclaimable:212kB slab_unreclaimable:5428kB kernel_stack:112kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:59 all_unreclaimable? 
yes Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858524] lowmem_reserve[]: 0 0 3534 3534 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858530] Normal free:5844kB min:5196kB low:6492kB high:7792kB active_anon:2998056kB inactive_anon:599780kB active_file:532kB inactive_file:556kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3619728kB mlocked:0kB dirty:32kB writeback:0kB mapped:252kB shmem:168kB slab_reclaimable:2632kB slab_unreclaimable:25012kB kernel_stack:2272kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:672 all_unreclaimable? no Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858534] lowmem_reserve[]: 0 0 0 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858536] DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 4*4096kB = 16384kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858543] DMA32: 2942*4kB 1*8kB 0*16kB 0*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 19904kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858549] Normal: 471*4kB 3*8kB 6*16kB 2*32kB 3*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 5844kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858555] 477 total pagecache pages Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858557] 0 pages in swap cache Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858559] Swap cache stats: add 0, delete 0, find 0/0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858560] Free swap = 0kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858561] Total swap = 0kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882870] 1968128 pages RAM Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882873] 61087 pages reserved Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882874] 1106 pages shared Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882875] 1894560 pages non-shared Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882878] Out of memory: kill process 1491 (varnishd) score 2838972 or a child Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882892] Killed process 1492 (varnishd) Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (1492) died signal=9 Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child cleanup complete Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: child (21675) Started Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (21675) said Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (21675) said Child starts From nadahalli at gmail.com Wed Mar 23 04:48:35 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:48:35 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: I am running my Python origin-server on the same machine. It seems like the Python interpreter caused the OOM killer to kill Varnish. If that's the case, is there anything I can do prevent this from happening? -T On Tue, Mar 22, 2011 at 11:46 PM, Tejaswi Nadahalli wrote: > Resending the other attachments (syslog and varnishstat) > > -T > > > On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: > >> The child process got killed abruptly. >> >> I am attaching a bunch of munin graphs, relevant syslog, the current >> varnishstat -1 output. 
>> >> I am running Varnish 2.1.5 on a 64 bit machine with the following command: >> >> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >> 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p >> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >> session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 >> >> My VCL is fairly simple, and I think has nothing to do with the error. >> >> Any help would be appreciated. >> >> -T >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Wed Mar 23 05:27:48 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Wed, 23 Mar 2011 00:27:48 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: I found a couple of other threads involving the OOM killer. http://www.varnish-cache.org/lists/pipermail/varnish-misc/2009-April/002722.html http://www.varnish-cache.org/lists/pipermail/varnish-misc/2009-June/002838.html In both these cases, they had quite a few purge requests which added purge records which never got expired and that might have caused the out of control memory growth. I have a similar situation - with purges happening every 15 minutes. Mar 22 06:31:35 ip-10-116-105-253 varnishd[1491]: CLI telnet 127.0.0.1 60642 127.0.0.1 2000 Rd purge req.url ~ ^/\\?idsite=18&url=http%3A%2F% 2Fwww.people.com%2Fpeople%2Farticle%2F These are essentially the 'same' purges that are fired every 15 minutes. Do I have to setup the ban lurker to avoid out of control memory growth? -T On Tue, Mar 22, 2011 at 11:48 PM, Tejaswi Nadahalli wrote: > I am running my Python origin-server on the same machine. It seems like the > Python interpreter caused the OOM killer to kill Varnish. If that's the > case, is there anything I can do prevent this from happening? > > -T > > > On Tue, Mar 22, 2011 at 11:46 PM, Tejaswi Nadahalli wrote: > >> Resending the other attachments (syslog and varnishstat) >> >> -T >> >> >> On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: >> >>> The child process got killed abruptly. >>> >>> I am attaching a bunch of munin graphs, relevant syslog, the current >>> varnishstat -1 output. >>> >>> I am running Varnish 2.1.5 on a 64 bit machine with the following >>> command: >>> >>> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >>> 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p >>> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >>> session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 >>> >>> My VCL is fairly simple, and I think has nothing to do with the error. >>> >>> Any help would be appreciated. >>> >>> -T >>> >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Wed Mar 23 08:11:52 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Wed, 23 Mar 2011 08:11:52 +0100 Subject: Child Process Killed In-Reply-To: (Tejaswi Nadahalli's message of "Tue, 22 Mar 2011 23:48:35 -0400") References: Message-ID: <874o6uz4xz.fsf@qurzaw.varnish-software.com> ]] Tejaswi Nadahalli | I am running my Python origin-server on the same machine. It seems like the | Python interpreter caused the OOM killer to kill Varnish. If that's the | case, is there anything I can do prevent this from happening? Add more memory, don't leak memory in your python process, limit the amount of memory varnish uses, add swap or change the oom score for varnish? 
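For the "change the oom score" option, a sketch of what that can look like on a 2.6.32-era kernel; the -17 value disables OOM killing for a process, and the loop is meant to run from the init script after varnishd has started (the value and placement are illustrative, not taken from any of the posts):

    # exempt both the management and the child process from the OOM killer
    for pid in $(pidof varnishd); do
        echo -17 > /proc/$pid/oom_adj
    done

Newer kernels replace /proc/PID/oom_adj with /proc/PID/oom_score_adj, which takes a different value range.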
-- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From andrea.campi at zephirworks.com Wed Mar 23 16:08:06 2011 From: andrea.campi at zephirworks.com (Andrea Campi) Date: Wed, 23 Mar 2011 16:08:06 +0100 Subject: Nested ESI + gzip + Squid 2.7.STABLE9 = invalid compressed data--format violated Message-ID: Hi, I am currently working with a client to implement ESI + gzip with trunk Varnish; since phk asked for help in breaking it, here we are :) Some background: the customer is a publishing company and we are working on the website for their daily newspaper, so ease of integration with their CMS and timely expiration of ESI fragments is paramount. Because of this, I'm using the classic technique of having the page esi:include a document with very short TTL, that in turn esi:includes the real fragment (that has a long TTL), including in the URL the last-modification TTL. So we have something like: index.shtml -> /includes2010/header.esi/homepage -> /includes2010/header.shtml/homepage This works nicely when I strip the Accept-Encoding header, on both 2.1.5 and trunk. But it breaks down with gzip compression on: Safari and Chrome give up at the point where the first ESI include is, Firefox mostly just errors out; all of them sometimes provide vague errors. The best info I have is from: "curl | zip" gzip: out: invalid compressed data--format violated Unsetting bereq.http.accept-encoding on the first ESI request didn't help; unsetting it on the second request *did* help, fixing the issue for all browsers. Setting TTL=0 for /includes2010/header.shtml/homepage didn't make a difference, nor did changing vcl_recv to return(pass), so it seems it's not a matter of what is stored in the cache. [.... a couple of hours later ....] Long story short, I finally realized the problem is not with Varnish per se, but with the office proxy (Squid 2.7.STABLE9); it seems to corrupt the gzip stream just after the 00 00 FF FF sequence: -0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff ec 3d db 72 dc +0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff 00 3d db 72 dc -0024040 75 21 aa 39 01 00 00 00 ff ff d4 59 db 52 23 39 +0024040 75 21 aa 39 01 00 00 00 ff ff 00 59 db 52 23 39 and so on. However, what I wrote above is still true: if I only have one level of ESI include, or if I have two but the inner one is not originally gzip, Squid doesn't corrupt the content. I have a few gzipped files, as well as sample vcl and html files (not that these matter after all), I can send them if those would help. Andrea From ronan at iol.ie Wed Mar 23 17:25:43 2011 From: ronan at iol.ie (Ronan Mullally) Date: Wed, 23 Mar 2011 16:25:43 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <871v2fwizs.fsf@qurzaw.varnish-software.com> References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. > | The current load is light (up to a couple of thousand active sessions, > | peak is around five thousand). Varnish has a fairly simple config with > | a director consisting of two Apache backends: > > This looks a bit odd: > > | backend backend1 { > | .host = "1.2.3.4"; > | .port = "80"; > | .connect_timeout = 5s; > | .first_byte_timeout = 90s; > | .between_bytes_timeout = 90s; > | A typical request is below. The first attempt fails with: > | > | 33 FetchError c http first read error: -1 0 (Success) > > This just means the backend closed the connection on us. 
> > | there is presumably a restart and the second attempt (sometimes to > | backend1, sometimes backend2) fails with: > | > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > This is a timeout, however: > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > That 2.89s backend response time doesn't add up with your timeouts. Can > you see if you can get a tcpdump of what's going on? Varnishlog and output from TCP for a typical occurance is below. If you need any further details let me know. -Ronan 16 ReqStart c 202.168.71.170 39173 403520520 16 RxRequest c POST 16 RxURL c /ajax.php 16 RxProtocol c HTTP/1.1 16 RxHeader c Via: 1.1 APSRVMY35001 16 RxHeader c Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 16 RxHeader c Referer: http://www.redcafe.net/f8/ 16 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 16 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 16 RxHeader c Host: www.redcafe.net 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 16 RxHeader c Accept-Language: en-gb,en;q=0.5 16 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 16 RxHeader c Keep-Alive: 115 16 RxHeader c X-Requested-With: XMLHttpRequest 16 RxHeader c Pragma: no-cache 16 RxHeader c Cache-Control: no-cache 16 RxHeader c Connection: Keep-Alive 16 RxHeader c Content-Length: 82 16 VCL_call c recv 16 VCL_return c pass 16 VCL_call c hash 16 VCL_return c hash 16 VCL_call c pass 16 VCL_return c pass 16 Backend c 53 redcafe redcafe1 53 TxRequest b POST 53 TxURL b /ajax.php 53 TxProtocol b HTTP/1.1 53 TxHeader b Via: 1.1 APSRVMY35001 53 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 53 TxHeader b Referer: http://www.redcafe.net/f8/ 53 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 53 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 53 TxHeader b Host: www.redcafe.net 53 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 53 TxHeader b Accept-Language: en-gb,en;q=0.5 53 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 53 TxHeader b X-Requested-With: XMLHttpRequest 53 TxHeader b Pragma: no-cache 53 TxHeader b Cache-Control: no-cache 53 TxHeader b Content-Length: 82 53 TxHeader b X-Forwarded-For: 202.168.71.170 53 TxHeader b X-Varnish: 403520520 16 FetchError c http first read error: -1 0 (Success) 53 BackendClose b redcafe1 16 Backend c 52 redcafe redcafe2 52 TxRequest b POST 52 TxURL b /ajax.php 52 TxProtocol b HTTP/1.1 52 TxHeader b Via: 1.1 APSRVMY35001 52 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 
52 TxHeader b Referer: http://www.redcafe.net/f8/ 52 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 52 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 52 TxHeader b Host: www.redcafe.net 52 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 52 TxHeader b Accept-Language: en-gb,en;q=0.5 52 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 52 TxHeader b X-Requested-With: XMLHttpRequest 52 TxHeader b Pragma: no-cache 52 TxHeader b Cache-Control: no-cache 52 TxHeader b Content-Length: 82 52 TxHeader b X-Forwarded-For: 202.168.71.170 52 TxHeader b X-Varnish: 403520520 16 FetchError c backend write error: 11 (Resource temporarily unavailable) 52 BackendClose b redcafe2 16 VCL_call c error 16 VCL_return c deliver 16 VCL_call c deliver 16 VCL_return c deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 503 16 TxResponse c Service Unavailable 16 TxHeader c Server: Varnish 16 TxHeader c Retry-After: 0 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Content-Length: 2623 16 TxHeader c Date: Mon, 14 Mar 2011 11:16:11 GMT 16 TxHeader c X-Varnish: 403520520 16 TxHeader c Age: 2 16 TxHeader c Via: 1.1 varnish 16 TxHeader c Connection: close 16 Length c 2623 16 ReqEnd c 403520520 1300101369.629967451 1300101371.923255682 0.000078917 2.293243885 0.000044346 16 SessionClose c error 16 StatSess c 202.168.71.170 39173 2 1 1 0 1 0 235 2623 First attempt (redcafe1 backend) 2011-03-14 11:16:09.892897 IP 193.27.1.46.22809 > 193.27.1.44.80: . ack 2433 win 91 0x0000: 4500 0034 d23d 4000 4006 e3f5 c11b 012e E..4.=@. at ....... 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q 0x0020: 8010 005b 35a1 0000 0101 080a 0c9d 985b ...[5..........[ 0x0030: 101a 178d .... 2011-03-14 11:16:09.926678 IP 193.27.1.46.22809 > 193.27.1.44.80: P 400:1549(1149) ack 2433 win 91 0x0000: 4500 04b1 d23e 4000 4006 df77 c11b 012e E....>@. at ..w.... 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q 0x0020: 8018 005b 8934 0000 0101 080a 0c9d 9863 ...[.4.........c 0x0030: 101a 178d 504f 5354 202f 616a 6178 2e70 ....POST./ajax.p 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u ... 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 
0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a73 :.403520520....s 0x0460: 6563 7572 6974 7974 6f6b 656e 3d31 3330 ecuritytoken=130 0x0470: 3030 3937 3737 302d 3539 3938 3235 3061 0097770-5998250a 0x0480: 6336 6662 3932 3431 6435 3335 3835 6366 c6fb9241d53585cf 0x0490: 3863 6537 3039 3534 6333 6531 6362 3430 8ce70954c3e1cb40 0x04a0: 2664 6f3d 7365 6375 7269 7479 746f 6b65 &do=securitytoke 0x04b0: 6e n 2011-03-14 11:16:09.926769 IP 193.27.1.46.22809 > 193.27.1.44.80: F 1549:1549(0) ack 2433 win 91 0x0000: 4500 0034 d23f 4000 4006 e3f3 c11b 012e E..4.?@. at ....... 0x0010: c11b 012c 5919 0050 41f0 4b83 4cdf c051 ...,Y..PA.K.L..Q 0x0020: 8011 005b 311b 0000 0101 080a 0c9d 9863 ...[1..........c 0x0030: 101a 178d .... 2011-03-14 11:16:09.926870 IP 193.27.1.44.80 > 193.27.1.46.22809: . ack 1550 win 36 0x0000: 4500 0034 a05f 4000 4006 15d4 c11b 012c E..4._ at .@......, 0x0010: c11b 012e 0050 5919 4cdf c051 41f0 4b84 .....PY.L..QA.K. 0x0020: 8010 0024 3140 0000 0101 080a 101a 179f ...$1 at .......... 0x0030: 0c9d 9863 ...c Second attempt (redcafe2 backend) 2011-03-14 11:16:11.923056 IP 193.27.1.46.55567 > 193.27.1.45.80: P 6711:7778(1067) ack 148116 win 757 0x0000: 4500 045f b04c 4000 4006 01bb c11b 012e E.._.L at .@....... 
0x0010: c11b 012d d90f 0050 3df9 5730 48af 6a67 ...-...P=.W0H.jg 0x0020: 8018 02f5 88e3 0000 0101 080a 0c9d 9a57 ...............W 0x0030: 1019 bc2d 504f 5354 202f 616a 6178 2e70 ...-POST./ajax.p 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u ... 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a :.403520520.... 2011-03-14 11:16:11.923115 IP 193.27.1.46.55567 > 193.27.1.45.80: F 7778:7778(0) ack 148116 win 757 0x0000: 4500 0034 b04d 4000 4006 05e5 c11b 012e E..4.M at .@....... 0x0010: c11b 012d d90f 0050 3df9 5b5b 48af 6a67 ...-...P=.[[H.jg 0x0020: 8011 02f5 562f 0000 0101 080a 0c9d 9a57 ....V/.........W 0x0030: 1019 bc2d ...- 2011-03-14 11:16:11.923442 IP 193.27.1.45.80 > 193.27.1.46.55567: . 
ack 7778 win 137 0x0000: 4500 0034 9178 4000 4006 24ba c11b 012d E..4.x at .@.$....- 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5b .....P..H.jg=.[[ 0x0020: 8010 0089 56b4 0000 0101 080a 1019 be15 ....V........... 0x0030: 0c9d 9a57 ...W 2011-03-14 11:16:11.923454 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7779 win 137 0x0000: 4500 0034 9179 4000 4006 24b9 c11b 012d E..4.y at .@.$....- 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5c .....P..H.jg=.[\ 0x0020: 8010 0089 56b3 0000 0101 080a 1019 be15 ....V........... 0x0030: 0c9d 9a57 ...W From phk at phk.freebsd.dk Wed Mar 23 19:24:26 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 23 Mar 2011 18:24:26 +0000 Subject: Nested ESI + gzip + Squid 2.7.STABLE9 = invalid compressed data--format violated In-Reply-To: Your message of "Wed, 23 Mar 2011 16:08:06 +0100." Message-ID: <13858.1300904666@critter.freebsd.dk> In message , Andr ea Campi writes: >Long story short, I finally realized the problem is not with Varnish >per se, but with the office proxy (Squid 2.7.STABLE9); it seems to >corrupt the gzip stream just after the 00 00 FF FF sequence: > >-0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff ec 3d db 72 dc >+0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff 00 3d db 72 dc > >-0024040 75 21 aa 39 01 00 00 00 ff ff d4 59 db 52 23 39 >+0024040 75 21 aa 39 01 00 00 00 ff ff 00 59 db 52 23 39 > >and so on. We found a similar issue in ngnix last week: A 1 byte chunked encoding get zap'ed to 0x00 just like what you show. Are you sure there is no ngnix instance involved ? It would be weird of both squid and ngnix has the same bug ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From TFigueiro at au.westfield.com Wed Mar 23 21:33:06 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Thu, 24 Mar 2011 07:33:06 +1100 Subject: Child Process Killed In-Reply-To: References: Message-ID: From: Tejaswi Nadahalli > I am running my Python origin-server on the same machine. It seems like > the Python interpreter caused the OOM killer to kill Varnish. If that's > the case, is there anything I can do prevent this from happening? I've been meaning to write-up a blog entry regarding the OOM killer in Linux (what a dumb idea) but in the mean time this should get you started. The OOM Killer is there because Linux, by default in most distros, allocates more memory than available (swap+ram) on the assumption that applications will never need it (this is called overcommiting). Mostly this is true but when it's not the oom_kill is called to free-up some memory so the kernel can keep its promise. Usually it does a shit job (as you just noticed) and I hate it so much. One way to solve this is to tweak oom_kill so it doesn't kill varnish processes. It's a bit cumbersome because you need to do that based on the PID, which you only learn after the process has started, leaving room for some nifty race conditions. Still, adding these to Varnish's init scripts should do what you need - look up online for details. The other way is to disable memory overcommit. 
Add to /etc/sysctl.conf: # Disables memory overcommit vm.overcommit_memory = 2 # Tweak to fool VM (read manual for setting above) vm.overcommit_ratio = 100 # swap only if really needed vm.swappiness = 10 and sudo /sbin/sysctl -e -p /etc/sysctl.conf The problem with setting overcommit_memory to 2 is that the VM will not allocate more memory than you have available (the actual rule is a function of RAM, swap and overcommit_ratio, hence the tweak above). This could be a problem for Varnish depending on the storage used. The file storage will mmap the file, resulting in a VM size as large as the file. If you don't have enough RAM the kernel will deny memory allocation and varnish will fail to start. At this point you either buy more RAM or tweak your swap size to account for greedy processes (ie.: processes that allocate a lot of memory but never use it). TL;DR: buy more memory; get rid of memory hungry scripts in your varnish box Good luck. ______________________________________________________ CONFIDENTIALITY NOTICE This electronic mail message, including any and/or all attachments, is for the sole use of the intended recipient(s), and may contain confidential and/or privileged information, pertaining to business conducted under the direction and supervision of the sending organization. All electronic mail messages, which may have been established as expressed views and/or opinions (stated either within the electronic mail message or any of its attachments), are left to the sole responsibility of that of the sender, and are not necessarily attributed to the sending organization. Unauthorized interception, review, use, disclosure or distribution of any such information contained within this electronic mail message and/or its attachment(s), is (are) strictly prohibited. If you are not the intended recipient, please contact the sender by replying to this electronic mail message, along with the destruction all copies of the original electronic mail message (along with any attachments). ______________________________________________________ From nadahalli at gmail.com Wed Mar 23 22:10:20 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Wed, 23 Mar 2011 17:10:20 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: Thanks for the detailed explanation of why the OOM Killer strikes. I have dome some reading about it, and am tinkering with how to stop it from killing varnishd. What I am curious about is - how did the OOM killer get invoked at all? My python process is fairly basic, and wouldn't have consumed any memory at all. When varnish reaches it's malloc limit, I thought cached objects would start getting Nuked. My LRU nuke counters were 0 through the process. So, instead of nuking objects gracefully, I had a varnish-child-restart. This is what I am worried about. If I can get nuking by reducing the overall memory footprint by reducing the malloc limits, I will gladly do it. Do you think that might help? -T On Wed, Mar 23, 2011 at 4:33 PM, Thiago Figueiro wrote: > From: Tejaswi Nadahalli > > I am running my Python origin-server on the same machine. It seems like > > the Python interpreter caused the OOM killer to kill Varnish. If that's > > the case, is there anything I can do prevent this from happening? > > > I've been meaning to write-up a blog entry regarding the OOM killer in > Linux (what a dumb idea) but in the mean time this should get you started. 
> > The OOM Killer is there because Linux, by default in most distros, > allocates more memory than available (swap+ram) on the assumption that > applications will never need it (this is called overcommiting). Mostly this > is true but when it's not the oom_kill is called to free-up some memory so > the kernel can keep its promise. Usually it does a shit job (as you just > noticed) and I hate it so much. > > One way to solve this is to tweak oom_kill so it doesn't kill varnish > processes. It's a bit cumbersome because you need to do that based on the > PID, which you only learn after the process has started, leaving room for > some nifty race conditions. Still, adding these to Varnish's init scripts > should do what you need - look up online for details. > > The other way is to disable memory overcommit. Add to /etc/sysctl.conf: > > # Disables memory overcommit > vm.overcommit_memory = 2 > # Tweak to fool VM (read manual for setting above) > vm.overcommit_ratio = 100 > # swap only if really needed > vm.swappiness = 10 > > and sudo /sbin/sysctl -e -p /etc/sysctl.conf > > The problem with setting overcommit_memory to 2 is that the VM will not > allocate more memory than you have available (the actual rule is a function > of RAM, swap and overcommit_ratio, hence the tweak above). > > This could be a problem for Varnish depending on the storage used. The > file storage will mmap the file, resulting in a VM size as large as the > file. If you don't have enough RAM the kernel will deny memory allocation > and varnish will fail to start. At this point you either buy more RAM or > tweak your swap size to account for greedy processes (ie.: processes that > allocate a lot of memory but never use it). > > > TL;DR: buy more memory; get rid of memory hungry scripts in your varnish > box > > > Good luck. > > > ______________________________________________________ > CONFIDENTIALITY NOTICE > This electronic mail message, including any and/or all attachments, is for > the sole use of the intended recipient(s), and may contain confidential > and/or privileged information, pertaining to business conducted under the > direction and supervision of the sending organization. All electronic mail > messages, which may have been established as expressed views and/or opinions > (stated either within the electronic mail message or any of its > attachments), are left to the sole responsibility of that of the sender, and > are not necessarily attributed to the sending organization. Unauthorized > interception, review, use, disclosure or distribution of any such > information contained within this electronic mail message and/or its > attachment(s), is (are) strictly prohibited. If you are not the intended > recipient, please contact the sender by replying to this electronic mail > message, along with the destruction all copies of the original electronic > mail message (along with any attachments). > ______________________________________________________ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From TFigueiro at au.westfield.com Thu Mar 24 03:39:24 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Thu, 24 Mar 2011 13:39:24 +1100 Subject: Child Process Killed In-Reply-To: References: Message-ID: > Do you think that might help? You're looking for /proc/PID/oom_score; here, read this: http://lwn.net/Articles/317814/ Reducing memory usage will help, yes. And what Tollef said in his reply is the practical approach: add ram and/or swap. 
At some point the sum of the RESIDENT processes' memory sizes is bigger than
SWAP+RAM, and this is what triggers oom_kill.

The other way around is what you suggested yourself: reduce memory usage.

G'luck

From nfn at gmx.com  Fri Mar 25 10:47:33 2011
From: nfn at gmx.com (Nuno Neves)
Date: Fri, 25 Mar 2011 09:47:33 +0000
Subject: Using cron to purge cache
Message-ID: <20110325094733.232990@gmx.com>

Hello,

I have a file named varnish-purge with this content that is executed daily
by cron, but the objects remain in the cache, even when I run it manually.
--------------------------------------------------------------------------------------------
#!/bin/sh

/usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*"
--------------------------------------------------------------------------------------------

The cron file is:
-----------------------------------
#!/bin/sh
/usr/local/bin/varnish-purge
-----------------------------------

I already used:

/usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 url.purge '.*'

and

/usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 url.purge .

without success. The only way to purge the cache is restarting varnish.

I'm using varnish 2.1.5 from http://repo.varnish-cache.org
http://repo.varnish-cache.org/debian/GPG-key.txt

Any guidance will be appreciated.

Thanks
Nuno
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From perbu at varnish-software.com  Fri Mar 25 10:54:47 2011
From: perbu at varnish-software.com (Per Buer)
Date: Fri, 25 Mar 2011 10:54:47 +0100
Subject: Using cron to purge cache
In-Reply-To: <20110325094733.232990@gmx.com>
References: <20110325094733.232990@gmx.com>
Message-ID: 

Hi Nuno.

On Fri, Mar 25, 2011 at 10:47 AM, Nuno Neves wrote:
> Hello,
>
> I have a file named varnish-purge with this content that is executed daily
> by cron, but the objects remain in the cache, even when I run it manually.
> --------------------------------------------------------------------------------------------
> #!/bin/sh
>
> /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*"

url.purge will create what we call a "ban", or a filter. Think of it
as a lazy purge. The objects will remain in memory but will be killed
during lookup. If you want to kill the objects from cache you'd have to
set up the ban lurker to walk the objects and expunge them.
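For reference, the HTTP PURGE route described in the next paragraph is
usually wired up along these lines in 2.1-era VCL; this is only a sketch,
and the purgers ACL with its single localhost entry is an illustrative
assumption:

acl purgers {
    "localhost";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        # look the object up so vcl_hit/vcl_miss can deal with it
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        # expire the cached object right away
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}

The actual eviction is the obj.ttl = 0s in vcl_hit, which is the "setting
the TTL to zero" part below; a client sends one PURGE request per URL it
wants evicted.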
If you want the objects to actually disappear from memory right away you would have to do a HTTP PURGE call, and setting the TTL to zero, but that means you'd have to kill off every URL in cache. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From ronan at iol.ie Fri Mar 25 11:12:54 2011 From: ronan at iol.ie (Ronan Mullally) Date: Fri, 25 Mar 2011 10:12:54 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: I am still encountering this problem - about 1% on average of POSTs are failing with a 503 when there is no problem apparent on the back-ends. GETs are not affected: Hour GETs Fails POSTs Fails 00:00 38060 0 (0.00%) 480 2 (0.42%) 01:00 34051 0 (0.00%) 412 0 (0.00%) 02:00 29881 0 (0.00%) 383 2 (0.52%) 03:00 25741 0 (0.00%) 374 1 (0.27%) 04:00 22296 0 (0.00%) 326 2 (0.61%) 05:00 22594 0 (0.00%) 349 20 (5.73%) 06:00 31422 0 (0.00%) 408 6 (1.47%) 07:00 58746 0 (0.00%) 656 6 (0.91%) 08:00 74307 0 (0.00%) 870 4 (0.46%) 09:00 87386 0 (0.00%) 1280 8 (0.62%) 10:00 51744 0 (0.00%) 741 8 (1.08%) 11:00 50060 0 (0.00%) 825 1 (0.12%) 12:00 58573 0 (0.00%) 664 5 (0.75%) 13:00 60548 0 (0.00%) 735 7 (0.95%) 14:00 60242 0 (0.00%) 875 8 (0.91%) 15:00 61427 0 (0.00%) 778 3 (0.39%) 16:00 66480 0 (0.00%) 810 4 (0.49%) 17:00 65749 0 (0.00%) 836 12 (1.44%) 18:00 64312 0 (0.00%) 732 3 (0.41%) 19:00 60930 0 (0.00%) 652 5 (0.77%) 20:00 59646 0 (0.00%) 626 1 (0.16%) 21:00 61218 0 (0.00%) 674 3 (0.45%) 22:00 55908 0 (0.00%) 598 3 (0.50%) 23:00 45173 0 (0.00%) 560 1 (0.18%) There was another poster on this thread with the same problem which suggests a possible varnish problem rather than anything specific to my setup. Does anybody have any ideas? -Ronan On Wed, 23 Mar 2011, Ronan Mullally wrote: > On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > > > | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. > > | The current load is light (up to a couple of thousand active sessions, > > | peak is around five thousand). Varnish has a fairly simple config with > > | a director consisting of two Apache backends: > > > > This looks a bit odd: > > > > | backend backend1 { > > | .host = "1.2.3.4"; > > | .port = "80"; > > | .connect_timeout = 5s; > > | .first_byte_timeout = 90s; > > | .between_bytes_timeout = 90s; > > | A typical request is below. The first attempt fails with: > > | > > | 33 FetchError c http first read error: -1 0 (Success) > > > > This just means the backend closed the connection on us. > > > > | there is presumably a restart and the second attempt (sometimes to > > | backend1, sometimes backend2) fails with: > > | > > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > > > This is a timeout, however: > > > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > > > That 2.89s backend response time doesn't add up with your timeouts. Can > > you see if you can get a tcpdump of what's going on? > > Varnishlog and output from TCP for a typical occurance is below. If you > need any further details let me know. 
> > > -Ronan > > 16 ReqStart c 202.168.71.170 39173 403520520 > 16 RxRequest c POST > 16 RxURL c /ajax.php > 16 RxProtocol c HTTP/1.1 > 16 RxHeader c Via: 1.1 APSRVMY35001 > 16 RxHeader c Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... > 16 RxHeader c Referer: http://www.redcafe.net/f8/ > 16 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 > 16 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 > 16 RxHeader c Host: www.redcafe.net > 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 16 RxHeader c Accept-Language: en-gb,en;q=0.5 > 16 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 16 RxHeader c Keep-Alive: 115 > 16 RxHeader c X-Requested-With: XMLHttpRequest > 16 RxHeader c Pragma: no-cache > 16 RxHeader c Cache-Control: no-cache > 16 RxHeader c Connection: Keep-Alive > 16 RxHeader c Content-Length: 82 > 16 VCL_call c recv > 16 VCL_return c pass > 16 VCL_call c hash > 16 VCL_return c hash > 16 VCL_call c pass > 16 VCL_return c pass > 16 Backend c 53 redcafe redcafe1 > 53 TxRequest b POST > 53 TxURL b /ajax.php > 53 TxProtocol b HTTP/1.1 > 53 TxHeader b Via: 1.1 APSRVMY35001 > 53 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... > 53 TxHeader b Referer: http://www.redcafe.net/f8/ > 53 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 > 53 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 > 53 TxHeader b Host: www.redcafe.net > 53 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 53 TxHeader b Accept-Language: en-gb,en;q=0.5 > 53 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 53 TxHeader b X-Requested-With: XMLHttpRequest > 53 TxHeader b Pragma: no-cache > 53 TxHeader b Cache-Control: no-cache > 53 TxHeader b Content-Length: 82 > 53 TxHeader b X-Forwarded-For: 202.168.71.170 > 53 TxHeader b X-Varnish: 403520520 > 16 FetchError c http first read error: -1 0 (Success) > 53 BackendClose b redcafe1 > 16 Backend c 52 redcafe redcafe2 > 52 TxRequest b POST > 52 TxURL b /ajax.php > 52 TxProtocol b HTTP/1.1 > 52 TxHeader b Via: 1.1 APSRVMY35001 > 52 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 
> 52 TxHeader b Referer: http://www.redcafe.net/f8/ > 52 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 > 52 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 > 52 TxHeader b Host: www.redcafe.net > 52 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 52 TxHeader b Accept-Language: en-gb,en;q=0.5 > 52 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 52 TxHeader b X-Requested-With: XMLHttpRequest > 52 TxHeader b Pragma: no-cache > 52 TxHeader b Cache-Control: no-cache > 52 TxHeader b Content-Length: 82 > 52 TxHeader b X-Forwarded-For: 202.168.71.170 > 52 TxHeader b X-Varnish: 403520520 > 16 FetchError c backend write error: 11 (Resource temporarily unavailable) > 52 BackendClose b redcafe2 > 16 VCL_call c error > 16 VCL_return c deliver > 16 VCL_call c deliver > 16 VCL_return c deliver > 16 TxProtocol c HTTP/1.1 > 16 TxStatus c 503 > 16 TxResponse c Service Unavailable > 16 TxHeader c Server: Varnish > 16 TxHeader c Retry-After: 0 > 16 TxHeader c Content-Type: text/html; charset=utf-8 > 16 TxHeader c Content-Length: 2623 > 16 TxHeader c Date: Mon, 14 Mar 2011 11:16:11 GMT > 16 TxHeader c X-Varnish: 403520520 > 16 TxHeader c Age: 2 > 16 TxHeader c Via: 1.1 varnish > 16 TxHeader c Connection: close > 16 Length c 2623 > 16 ReqEnd c 403520520 1300101369.629967451 1300101371.923255682 0.000078917 2.293243885 0.000044346 > 16 SessionClose c error > 16 StatSess c 202.168.71.170 39173 2 1 1 0 1 0 235 2623 > > First attempt (redcafe1 backend) > > 2011-03-14 11:16:09.892897 IP 193.27.1.46.22809 > 193.27.1.44.80: . ack 2433 win 91 > 0x0000: 4500 0034 d23d 4000 4006 e3f5 c11b 012e E..4.=@. at ....... > 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q > 0x0020: 8010 005b 35a1 0000 0101 080a 0c9d 985b ...[5..........[ > 0x0030: 101a 178d .... > 2011-03-14 11:16:09.926678 IP 193.27.1.46.22809 > 193.27.1.44.80: P 400:1549(1149) ack 2433 win 91 > 0x0000: 4500 04b1 d23e 4000 4006 df77 c11b 012e E....>@. at ..w.... > 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q > 0x0020: 8018 005b 8934 0000 0101 080a 0c9d 9863 ...[.4.........c > 0x0030: 101a 178d 504f 5354 202f 616a 6178 2e70 ....POST./ajax.p > 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via > 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 > 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut > 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 > 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 > 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 > 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u > 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 > 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut > 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u > ... > 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 > 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref > 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www > 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ > 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 
> 0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww > 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode > 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 > 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo > 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo > 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N > 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv > 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 > 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 > 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re > 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep > 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app > 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x > 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x > 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 > 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu > 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= > 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char > 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, > 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= > 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested > 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe > 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n > 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C > 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache > 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length > 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde > 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 > 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish > 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a73 :.403520520....s > 0x0460: 6563 7572 6974 7974 6f6b 656e 3d31 3330 ecuritytoken=130 > 0x0470: 3030 3937 3737 302d 3539 3938 3235 3061 0097770-5998250a > 0x0480: 6336 6662 3932 3431 6435 3335 3835 6366 c6fb9241d53585cf > 0x0490: 3863 6537 3039 3534 6333 6531 6362 3430 8ce70954c3e1cb40 > 0x04a0: 2664 6f3d 7365 6375 7269 7479 746f 6b65 &do=securitytoke > 0x04b0: 6e n > 2011-03-14 11:16:09.926769 IP 193.27.1.46.22809 > 193.27.1.44.80: F 1549:1549(0) ack 2433 win 91 > 0x0000: 4500 0034 d23f 4000 4006 e3f3 c11b 012e E..4.?@. at ....... > 0x0010: c11b 012c 5919 0050 41f0 4b83 4cdf c051 ...,Y..PA.K.L..Q > 0x0020: 8011 005b 311b 0000 0101 080a 0c9d 9863 ...[1..........c > 0x0030: 101a 178d .... > 2011-03-14 11:16:09.926870 IP 193.27.1.44.80 > 193.27.1.46.22809: . ack 1550 win 36 > 0x0000: 4500 0034 a05f 4000 4006 15d4 c11b 012c E..4._ at .@......, > 0x0010: c11b 012e 0050 5919 4cdf c051 41f0 4b84 .....PY.L..QA.K. > 0x0020: 8010 0024 3140 0000 0101 080a 101a 179f ...$1 at .......... > 0x0030: 0c9d 9863 ...c > > > Second attempt (redcafe2 backend) > > 2011-03-14 11:16:11.923056 IP 193.27.1.46.55567 > 193.27.1.45.80: P 6711:7778(1067) ack 148116 win 757 > 0x0000: 4500 045f b04c 4000 4006 01bb c11b 012e E.._.L at .@....... 
> 0x0010: c11b 012d d90f 0050 3df9 5730 48af 6a67 ...-...P=.W0H.jg > 0x0020: 8018 02f5 88e3 0000 0101 080a 0c9d 9a57 ...............W > 0x0030: 1019 bc2d 504f 5354 202f 616a 6178 2e70 ...-POST./ajax.p > 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via > 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 > 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut > 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 > 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 > 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 > 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u > 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 > 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut > 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u > ... > 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 > 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref > 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www > 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ > 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. > 0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww > 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode > 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 > 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo > 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo > 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N > 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv > 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 > 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 > 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re > 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep > 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app > 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x > 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x > 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 > 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu > 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= > 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char > 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, > 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= > 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested > 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe > 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n > 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C > 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache > 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length > 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde > 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 > 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish > 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a :.403520520.... > 2011-03-14 11:16:11.923115 IP 193.27.1.46.55567 > 193.27.1.45.80: F 7778:7778(0) ack 148116 win 757 > 0x0000: 4500 0034 b04d 4000 4006 05e5 c11b 012e E..4.M at .@....... 
> 0x0010: c11b 012d d90f 0050 3df9 5b5b 48af 6a67 ...-...P=.[[H.jg > 0x0020: 8011 02f5 562f 0000 0101 080a 0c9d 9a57 ....V/.........W > 0x0030: 1019 bc2d ...- > 2011-03-14 11:16:11.923442 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7778 win 137 > 0x0000: 4500 0034 9178 4000 4006 24ba c11b 012d E..4.x at .@.$....- > 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5b .....P..H.jg=.[[ > 0x0020: 8010 0089 56b4 0000 0101 080a 1019 be15 ....V........... > 0x0030: 0c9d 9a57 ...W > 2011-03-14 11:16:11.923454 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7779 win 137 > 0x0000: 4500 0034 9179 4000 4006 24b9 c11b 012d E..4.y at .@.$....- > 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5c .....P..H.jg=.[\ > 0x0020: 8010 0089 56b3 0000 0101 080a 1019 be15 ....V........... > 0x0030: 0c9d 9a57 ...W > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From mrbits.dcf at gmail.com Fri Mar 25 11:38:51 2011 From: mrbits.dcf at gmail.com (MrBiTs) Date: Fri, 25 Mar 2011 07:38:51 -0300 Subject: Using cron to purge cache Message-ID: <4D8C70BB.8010408@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/25/2011 06:54 , Per Buer wrote: > Hi Nuno. > > On Fri, Mar 25, 2011 at 10:47 AM, Nuno Neves wrote: >> Hello, >> >> I have a file named varnish-purge with this content that it's executed daily >> by cron, but the objects remain in the cache, even when I run it manually. >> -------------------------------------------------------------------------------------------- >> #!/bin/sh >> >> /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*" > > url.purge will create what we call a "ban", or a filter. Think of it > as a lazy purge. The objects will remain in memory but killed during > lookup. If you want to kill the objects from cache you'd have to set > up the ban lurker to walk the objects and expunge them. > > If you want the objects to actually disappear from memory right away > you would have to do a HTTP PURGE call, and setting the TTL to zero, > but that means you'd have to kill off every URL in cache. > I think we can do a nice discussion here. First, and this is a big off-topic here, if I need to purge all contents from time to time, it's better to create a huge webserver structure, to support requests, change the application a little to generate static pages from time to time to not increase the database load and forget about Varnish. But this is discussion to another list, another time. Second, is this recommended ? I mean, purge all URL, all contents in cache will do varnish to request this content again to backend, increasing server load and it can cause problems. What to you guys think about it ? I think it is better to have a purge system (like a message queue or a form to kill some objetcs) to remove only really wanted objects. If you need to purge all varnish contents, why not just restart varnish from time to time ? But, again, all backend issues must be considerated here. - -- LLAP .0. 
MrBiTs - mrbits.dcf at gmail.com ..0 GnuPG - http://keyserver.fug.com.br:11371/pks/lookup?op=get&search=0x6EC818FC2B3CA5AB 000 http://www.mrbits.com.br -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (Darwin) iQEcBAEBCAAGBQJNjHC7AAoJEG7IGPwrPKWr3kkH/1zim9haorjg4qbrLeefsyjd chBzbCdNwNUPqjbKW+V0hyw7OZY80boMCfD7ZIWgWd+Dy5kCou01D7qebRGYGHt9 oaSmgNFXISMUwOtZwl4F5uKsKhxH7ZtBdJncomoSz3+Apl9yY3gB0aYYfNoi8YoS btgWsNKBzWQTR2pFUz8dYqumrr0aQU3sQRhqBQ7YU165GnhzBSAOxQuTXwM5Lp+j IPLwfWuPaPdSt5nhueDrovdQqHGctWDjkB2JGpi0M8ALvPHETKIZA5oBMHXuXhXY uURPvOsLm2bFmhzDYG3Zr0sJ81ek4K7T2LXd4yT9uiqisnyd5WjbfTH6XS4keDY= =x2+0 -----END PGP SIGNATURE----- From contact at jpluscplusm.com Fri Mar 25 11:55:14 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 25 Mar 2011 10:55:14 +0000 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: On 21 March 2011 15:08, Jonathan Matthews wrote: > Hi all - > > I've got some long-running squid instances, mainly used for caching > medium-sized binaries, which I'd like to replace with some varnish > instances. ?The binaries are quite heavy to regenerate on the distant > origin servers and there's a large number of them. ?Hence, I'd like to > use the squid cache as a target to warm a (new, nearby) varnish > instance instead of just pointing the varnish instance at the remote > origin servers. > > The squid instances are running in proxy mode, and require (I > *believe*) an HTTP CONNECT. ?I've looked around for people trying the > same thing, but haven't come across any success stories. ?I'm > perfectly prepared to be told that I simply have to reconfigure the > squid instances in mixed proxy/origin-server mode, and that there's no > way around it, but I thought I'd ask the list for guidance first ... > > Any thoughts? Anyone? All opinions welcome ... :-) -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From lampe at hauke-lampe.de Sat Mar 26 18:05:03 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Sat, 26 Mar 2011 18:05:03 +0100 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: <4D8E1CBF.6060802@hauke-lampe.de> On 25.03.2011 11:55, Jonathan Matthews wrote: > The squid instances are running in proxy mode, and require (I > *believe*) an HTTP CONNECT. Do they really? I would think squid just pipes a CONNECT request wihout caching the contents, just like varnish does. I'm not quite sure about that, though. What I *think* you need to do is to rewrite the request URL so that it contains the hostname. An incoming request like this. | GET /foo | Host: example.com should be passed to squid in this form: | GET http://example.com/foo In VCL: set req.backend = squid; if (req.url ~ "^/" && req.http.Host) { set req.url = "http://" req.http.Host req.url; unset req.http.Host; } Hauke. From iliakan at gmail.com Sat Mar 26 21:12:48 2011 From: iliakan at gmail.com (Ilia Kantor) Date: Sat, 26 Mar 2011 23:12:48 +0300 Subject: Current sessions count Message-ID: Hello, How can I get a count of current Varnish sessions from inside VCL? Inline C will do. Approximate will do. I need it to enable DDOS protections if the count of current connects exceeds given constant. Maybe there is a VSL_stats field? -- --- Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kristian at varnish-software.com Mon Mar 28 12:52:13 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Mon, 28 Mar 2011 12:52:13 +0200 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: <20110328105213.GA9172@localhost.localdomain> On Mon, Mar 21, 2011 at 10:39:50PM -0400, AD wrote: > On Mon, Mar 21, 2011 at 10:30 PM, Ken Brownfield wrote: > > > Per lots of posts on this list, obj is now baresp in newer Varnish > > versions. It sounds like the documentation for this change hasn't been > > fully propagated. Small clarification (which should go into the docs, somewhere): obj.* still exists. beresp is the backend response which you can modify in vcl_fetch. Upon exiting vcl_fetch, beresp is used to allocate space for and populate the obj-structure. The only part of obj.* that is available in vcl_deliver is obj.hits. What you can do is store the ttl on req.http somewhere (assuming the conversions work) in vcl_hit, then copy it onto resp.* in vcl_deliver. - Kristian From johnson at nmr.mgh.harvard.edu Mon Mar 28 13:23:41 2011 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Mon, 28 Mar 2011 07:23:41 -0400 (EDT) Subject: 2.0.5 -> 2.1.5 Message-ID: Hi, Currently running 2.0.5. Has been working so well as a rule we just forgot about it. Would like to update to 2.1.5 because 2.0.5 hung up last week. I saw mention of hang bug in 2.0.5 but this is the first time we've felt it. I made a change to the config a while back to prevent double caching on a server name alternate name. Question, this is a plug n' play, yes? I can just install the new RPM and it will take off were it was stopped? No config differences that are applicable? That would be bloody awesome. If the is anything that will cause I problem I'd like to know about it before the update. Want the server down as short a time as possible. Tnx. ------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |"Life is chaos. Chaos is life. Control is an Charlestown, MA., 02129 USA | illusion." Trance Gemini ------------------------------------------------------------------------------- The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. From s.welschhoff at lvm.de Mon Mar 28 13:24:36 2011 From: s.welschhoff at lvm.de (Stefan Welschhoff) Date: Mon, 28 Mar 2011 13:24:36 +0200 Subject: localhost Message-ID: Hello, I am very new in varnish. I try to get a return code 200 when varnish opens the default backend. The default backend will be localhost. Is it possible? Thanks for your help. Kind regards -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: LVM_Unternehmenssignatur.pdf Type: application/pdf Size: 20769 bytes Desc: not available URL: From samcrawford at gmail.com Mon Mar 28 17:06:35 2011 From: samcrawford at gmail.com (Sam Crawford) Date: Mon, 28 Mar 2011 16:06:35 +0100 Subject: 2.0.5 -> 2.1.5 In-Reply-To: References: Message-ID: It'd probably be wise to spin up a 2.1.5 instance of Varnish on a development server using your production VCL. If it parses it okay and starts, then you should be fine. The only change that may catch you out is that obj.* changed to beresp.* from 2.0.x to 2.1.x. Thanks, Sam On 28 March 2011 12:23, Chris Johnson wrote: > ? ? Hi, > > ? ? Currently running 2.0.5. ?Has been working so well as a rule we > just forgot about it. ?Would like to update to 2.1.5 because 2.0.5 hung > up last week. ?I saw mention of hang bug in 2.0.5 but this is the first > time we've felt it. > > ? ? I made a change to the config a while back to prevent double > caching on a server name alternate name. ?Question, this is a plug n' play, > yes? ?I can just install the new RPM and it will take off were it was > stopped? ?No config differences that are applicable? ?That would be bloody > awesome. ?If the is anything that will cause I problem I'd like > to know about it before the update. ?Want the server down as short a > time as possible. ?Tnx. > > ------------------------------------------------------------------------------- > Chris Johnson ? ? ? ? ? ? ? |Internet: johnson at nmr.mgh.harvard.edu > Systems Administrator ? ? ? |Web: > ?http://www.nmr.mgh.harvard.edu/~johnson > NMR Center ? ? ? ? ? ? ? ? ?|Voice: ? ?617.726.0949 > Mass. General Hospital ? ? ?|FAX: ? ? ?617.726.7422 > 149 (2301) 13th Street ? ? ?|"Life is chaos. ?Chaos is life. ?Control is an > Charlestown, MA., 02129 USA | illusion." ?Trance Gemini > ------------------------------------------------------------------------------- > > > The information in this e-mail is intended only for the person to whom it is > addressed. If you believe this e-mail was sent to you in error and the > e-mail > contains patient information, please contact the Partners Compliance > HelpLine at > http://www.partners.org/complianceline . If the e-mail was sent to you in > error > but does not contain patient information, please contact the sender and > properly > dispose of the e-mail. > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From mhettwer at team.mobile.de Mon Mar 28 17:24:51 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Mon, 28 Mar 2011 16:24:51 +0100 Subject: localhost In-Reply-To: Message-ID: On 28.03.11 13:24, "Stefan Welschhoff" wrote: >Hello, Hi there, > >I am very new in varnish. I try to get a return code 200 when varnish >opens the default backend. The default backend will be localhost. Is it >possible? Short answer: Yes, if your backend behaves well. Little longer answer: If you configure a backend like that: backend default { .host = "127.0.0.1"; .port = "8080"; } And assuming that your backend really listens on localhost:8080, then use the following in vcl_recv: sub vcl_recv { set req.backend = default; } Now start varnish, and assuming you let varnish listen on localhost:80 you can do something like that wget -0 /dev/null -q -S http://localhost/foo.txt The request GET /foo.txt goes to varnish and he forwards this to your backend at localhost:8080. 
If "wget -0 /dev/null -q -S http://localhost:8080/foo.txt" works, then "wget -0 /dev/null -q -S http://localhost/foo.txt" will work too. Cheers, Marian PS.: Start with the fine documentation of varnish! From johnson at nmr.mgh.harvard.edu Mon Mar 28 17:54:23 2011 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Mon, 28 Mar 2011 11:54:23 -0400 (EDT) Subject: 2.0.5 -> 2.1.5 In-Reply-To: References: Message-ID: Well my config has the following in vcl fetch sub vcl_fetch { if (!obj.cacheable) { return (pass); } if (req.url ~ "^/fswiki") { unset req.http.Set-Cookie; set obj.ttl = 600s; } if (req.url ~ "^/wiki/fswiki_htdocs") { unset req.http.Set-Cookie; set obj.ttl = 600s; } if (obj.http.Set-Cookie) { return (pass); } set obj.prefetch = -30s; return (deliver); } But it's isolated. Presumably the 2.1.5 has its own. > It'd probably be wise to spin up a 2.1.5 instance of Varnish on a > development server using your production VCL. If it parses it okay and > starts, then you should be fine. > > The only change that may catch you out is that obj.* changed to > beresp.* from 2.0.x to 2.1.x. > > Thanks, > > Sam > > > On 28 March 2011 12:23, Chris Johnson wrote: >> ? ? Hi, >> >> ? ? Currently running 2.0.5. ?Has been working so well as a rule we >> just forgot about it. ?Would like to update to 2.1.5 because 2.0.5 hung >> up last week. ?I saw mention of hang bug in 2.0.5 but this is the first >> time we've felt it. >> >> ? ? I made a change to the config a while back to prevent double >> caching on a server name alternate name. ?Question, this is a plug n' play, >> yes? ?I can just install the new RPM and it will take off were it was >> stopped? ?No config differences that are applicable? ?That would be bloody >> awesome. ?If the is anything that will cause I problem I'd like >> to know about it before the update. ?Want the server down as short a >> time as possible. ?Tnx. >> >> ------------------------------------------------------------------------------- >> Chris Johnson ? ? ? ? ? ? ? |Internet: johnson at nmr.mgh.harvard.edu >> Systems Administrator ? ? ? |Web: >> ?http://www.nmr.mgh.harvard.edu/~johnson >> NMR Center ? ? ? ? ? ? ? ? ?|Voice: ? ?617.726.0949 >> Mass. General Hospital ? ? ?|FAX: ? ? ?617.726.7422 >> 149 (2301) 13th Street ? ? ?|"Life is chaos. ?Chaos is life. ?Control is an >> Charlestown, MA., 02129 USA | illusion." ?Trance Gemini >> ------------------------------------------------------------------------------- >> >> >> The information in this e-mail is intended only for the person to whom it is >> addressed. If you believe this e-mail was sent to you in error and the >> e-mail >> contains patient information, please contact the Partners Compliance >> HelpLine at >> http://www.partners.org/complianceline . If the e-mail was sent to you in >> error >> but does not contain patient information, please contact the sender and >> properly >> dispose of the e-mail. >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > ------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. 
General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |I know a place where dreams are born and time Charlestown, MA., 02129 USA | is never planned. Neverland ------------------------------------------------------------------------------- From info at songteksten.nl Tue Mar 29 14:56:15 2011 From: info at songteksten.nl (Maikel - Songteksten.nl) Date: Tue, 29 Mar 2011 14:56:15 +0200 Subject: Mobile redirect question Message-ID: <1301403375.2060.20.camel@maikel-laptop> Hi, I'm using currently the following code to do a mobile site redirect. I found it somewhere on the internet. if ( req.http.user-agent ~ "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" && req.http.host ~ "(www.example.nl|www.example.be)" ) { set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", "http://m.\2"); error 750 req.http.newhost; The redirect from www.example.nl to m.example.nl works perfectly, only www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou the page.php?id=1 part). Is it possible to change the redirect so it also includes the rest of the url? Thanks, Maikel From bjorn at ruberg.no Tue Mar 29 15:04:33 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Tue, 29 Mar 2011 15:04:33 +0200 Subject: Mobile redirect question In-Reply-To: <1301403375.2060.20.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> Message-ID: <4D91D8E1.9020905@ruberg.no> On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > Hi, > > I'm using currently the following code to do a mobile site redirect. I > found it somewhere on the internet. > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; > > The redirect from www.example.nl to m.example.nl works perfectly, only > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > the page.php?id=1 part). > > Is it possible to change the redirect so it also includes the rest of > the url? You don't show how error 750 is handled in your VCL, so it's a bit hard to tell how to improve your current config. 
However, the following URL should get you going: http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping -- Bj?rn From info at songteksten.nl Tue Mar 29 15:13:00 2011 From: info at songteksten.nl (Maikel - Songteksten.nl) Date: Tue, 29 Mar 2011 15:13:00 +0200 Subject: Mobile redirect question In-Reply-To: <4D91D8E1.9020905@ruberg.no> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> Message-ID: <1301404380.2060.21.camel@maikel-laptop> The redirect looks like this: sub vcl_error { if (obj.status == 750) { set obj.http.Location = obj.response; set obj.status = 302; return(deliver); } } Maikel On Tue, 2011-03-29 at 15:04 +0200, Bj?rn Ruberg wrote: > On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > > Hi, > > > > I'm using currently the following code to do a mobile site redirect. I > > found it somewhere on the internet. > > > > if ( req.http.user-agent ~ > > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > > > && req.http.host ~ "(www.example.nl|www.example.be)" > > > > ) { > > > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > > "http://m.\2"); > > error 750 req.http.newhost; > > > > The redirect from www.example.nl to m.example.nl works perfectly, only > > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > > the page.php?id=1 part). > > > > Is it possible to change the redirect so it also includes the rest of > > the url? > > You don't show how error 750 is handled in your VCL, so it's a bit hard > to tell how to improve your current config. However, the following URL > should get you going: > > http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping > From bjorn at ruberg.no Tue Mar 29 15:16:17 2011 From: bjorn at ruberg.no (=?UTF-8?B?QmrDuHJuIFJ1YmVyZw==?=) Date: Tue, 29 Mar 2011 15:16:17 +0200 Subject: Mobile redirect question In-Reply-To: <1301404380.2060.21.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> <1301404380.2060.21.camel@maikel-laptop> Message-ID: <4D91DBA1.9060108@ruberg.no> On 29. mars 2011 15:13, Maikel - Songteksten.nl wrote: > The redirect looks like this: > > sub vcl_error { > if (obj.status == 750) { > set obj.http.Location = obj.response; > set obj.status = 302; > return(deliver); > } > } You should still take a look at the URL I mentioned. And please don't top-post. -- Bj?rn From scaunter at topscms.com Tue Mar 29 15:53:41 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Tue, 29 Mar 2011 09:53:41 -0400 Subject: Mobile redirect question In-Reply-To: <4D91D8E1.9020905@ruberg.no> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C011D20CB@TMG-EVS02.torstar.net> On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > Hi, > > I'm using currently the following code to do a mobile site redirect. 
I > found it somewhere on the internet. > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*ht c.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia. *|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*Son yEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcat el.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoM o.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^K WC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*| ^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.* |^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Win dows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; > > The redirect from www.example.nl to m.example.nl works perfectly, only > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > the page.php?id=1 part). > > Is it possible to change the redirect so it also includes the rest of > the url? > You don't show how error 750 is handled in your VCL, so it's a bit hard > to tell how to improve your current config. However, the following URL > should get you going: > http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping set req.http.newhost = regsub(req.url, "^/(.*)", "http://m.example.ca/\1"); Stef From geoff at uplex.de Tue Mar 29 16:16:17 2011 From: geoff at uplex.de (Geoff Simmons) Date: Tue, 29 Mar 2011 16:16:17 +0200 Subject: Mobile redirect question In-Reply-To: <1301403375.2060.20.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> Message-ID: <4D91E9B1.9080407@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/29/11 02:56 PM, Maikel - Songteksten.nl wrote: > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; This is not what you asked about, but you're almost certainly losing a lot of performance with that regex. I would suggest that you put the check against req.http.host first (so that it doesn't bother with the pattern match when it doesn't have to), and above all, get rid of the leading and trailing .*'s in the regex. When you match a string against a regex like ".*foobar.*", it first matches the leading .* all the way until the end of the input string, overlooking any instance of "foobar" it sees along the way. Then it starts backtracking until it can match ".*f", then ".*fo", and so on. If it can match ".*foobar", it then takes the trouble to match the trailing .* to the end of the string. 
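Concretely, the reordering could look something like this; it is only a
sketch, the device list is cut down to a handful of entries, and it also
appends req.url to the redirect target so the path and query string survive
(which was the original question):

sub vcl_recv {
    if (req.http.host ~ "^(www\.example\.nl|www\.example\.be)$" &&
        req.http.User-Agent ~ "(Android|BlackBerry|iPhone|iPod|Symbian|nokia|Opera Mobi|Windows CE)") {
        # cheap host test first, then one unanchored match with no .* padding
        set req.http.newhost = regsub(req.http.host, "^(www\.)?(.*)$", "http://m.\2") req.url;
        error 750 req.http.newhost;
    }
}

with the existing error 750 handler in vcl_error left as it is.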
This is happening for all of the alternates in your regex until a match is found. phk's advice at VUG was: write your regex so that you can prove as quickly as possible that an input string *doesn't* match it. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNkemwAAoJEOUwvh9pJNURLpEP/3t9+FTbTpzXmx9cts+2NOcr VJl4L+bi4+b+Zkn46yMgZjwLOyRWhqYQBfFozqKVOIX204jH0kzuHFqWwkF3luNO B6izenicK6jhQurdUsS4CTJ6j74yCgX1Jks9DC4Z3pLcwwY/swzJsV2ldKx9rqWJ sr6NJv8WxSz1Pb/i5BP6C7veplmO/rdKLZxzll5b7Qic6LicrRGG5ny0exUdysce q2ZlcAXCe7//7Ha7+1wlw5xXb3APcx96SB4bh+ASS63KgHevKkSwPOFFUdv//FzG xLEc/U5MqKjiFErx0IPzPZrD+E2Yf0PIVqRc9L7eL9g5SSJEfqwmFCrHucLYpmpW tpdDepflnUv1p7IkY0boNabds8AhRPAIAtYi6o8+mjGQBtGVdOuQ4SbH2+2OOMLz x3YtAcjUjhArg8gUSjZRPIXfbHHy6vSiYKBPBqJUPmUBRw009VsCNO1F58b1sXJb YVmX6cKwfcq97GFqBBp+CsKEyJsJaubIReXQOoJTRrPVHqqn4aWmYOk1UHQiN5Pw iXNFJQbV/bh0jrgk5W5bcOS+WyvwSQm0aK8SMsHnVY4gh73md6kcD1rybc3S5doC +WEBLMdJWteDOZMQDBVgXXUmwmzHk8eX+6cRQKe4IaXXgRSoGOAZiwy+6G7a3YYk klz7Nm1RM3vs6EmQfvoY =kwSJ -----END PGP SIGNATURE----- From listas at kurtkraut.net Tue Mar 29 22:09:31 2011 From: listas at kurtkraut.net (Kurt Kraut) Date: Tue, 29 Mar 2011 17:09:31 -0300 Subject: How to collect lines from varnishncsa only from a specific domain? Message-ID: Hi, I'm trying to use varnishncsa -c -I to collect the output of varnishncsa concerning a specific domain (e.g.: domain.com). I've attempted the following commands: varnishncsa -c -I *domain.com* varnishncsa -c -I /*domain.com/ And none of them worked. But the following command works: varnishncsa -c | grep domain.com I feel there is something odd with the varnishncsa -I command. How does this work? Thanks in advance, Kurt Kraut -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionathan at gmail.com Wed Mar 30 04:43:43 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Tue, 29 Mar 2011 23:43:43 -0300 Subject: varnish as traffic director Message-ID: hi! is there any way to use varnish to direct my traffic to different backends depending on the requested url? so for example I would have 2 different backends: - search-backend - items-backend if the requested url is /search I want to direct the traffic to search-backend and if the requested url is /items I want to direct the traffic to items-backend is this a common use case for varnish or I am trying to accomplish something that should be done using something else? thanks! jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Wed Mar 30 04:52:35 2011 From: straightflush at gmail.com (AD) Date: Tue, 29 Mar 2011 22:52:35 -0400 Subject: varnish as traffic director In-Reply-To: References: Message-ID: sub vcl_recv { if (req.url ~ "^/search") { set req.backend = search-backend; } elseif (req.url ~ "^/items") { set req.backend = items-backend; } } On Tue, Mar 29, 2011 at 10:43 PM, Jonathan Leibiusky wrote: > hi! is there any way to use varnish to direct my traffic to different > backends depending on the requested url? 
> so for example I would have 2 different backends: > - search-backend > - items-backend > > if the requested url is /search I want to direct the traffic to > search-backend > and if the requested url is /items I want to direct the traffic to > items-backend > > is this a common use case for varnish or I am trying to accomplish > something that should be done using something else? > > thanks! > > jonathan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionathan at gmail.com Wed Mar 30 05:01:13 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Wed, 30 Mar 2011 00:01:13 -0300 Subject: varnish as traffic director In-Reply-To: References: Message-ID: Thanks! If I have 100 of different rules, I would have a very big if block, right? Is this a common use case for varnish? On Tue, Mar 29, 2011 at 11:52 PM, AD wrote: > sub vcl_recv { > > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > elseif (req.url ~ "^/items") { > set req.backend = items-backend; > } > > } > > On Tue, Mar 29, 2011 at 10:43 PM, Jonathan Leibiusky wrote: > >> hi! is there any way to use varnish to direct my traffic to different >> backends depending on the requested url? >> so for example I would have 2 different backends: >> - search-backend >> - items-backend >> >> if the requested url is /search I want to direct the traffic to >> search-backend >> and if the requested url is /items I want to direct the traffic to >> items-backend >> >> is this a common use case for varnish or I am trying to accomplish >> something that should be done using something else? >> >> thanks! >> >> jonathan >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Wed Mar 30 08:32:12 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Wed, 30 Mar 2011 08:32:12 +0200 Subject: 2.0.5 -> 2.1.5 In-Reply-To: (Sam Crawford's message of "Mon, 28 Mar 2011 16:06:35 +0100") References: Message-ID: <87r59pt8yb.fsf@qurzaw.varnish-software.com> ]] Sam Crawford | The only change that may catch you out is that obj.* changed to | beresp.* from 2.0.x to 2.1.x. Regexes also changed from case-insensitive to case-sensitive and we switched from POSIX regexes to PCRE, which might be important as well. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From martin.boer at bizztravel.nl Tue Mar 29 11:56:48 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Tue, 29 Mar 2011 11:56:48 +0200 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: <4D91ACE0.7020008@bizztravel.nl> Hi Jonathan, What you could do is something like; backend squid_1 { ... } backend backend_1 { ... } director prefer_squid random { .retries = 1; { .backend = squid_1 .weight = 250; } { .backend = backend_1; .weight = 1; } } This will make sure varnish will retrieve data from the squids mostly and gives you the chance to do the migration in your own time. 
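Filled in, that could look roughly like this; the addresses and ports are
invented, and it assumes the squids will answer plain GET requests from
Varnish (accelerator mode, or the absolute-URL rewrite mentioned earlier in
this thread):

backend squid_1 {
    .host = "192.0.2.10";
    .port = "3128";
}

backend backend_1 {
    .host = "192.0.2.20";
    .port = "80";
}

director prefer_squid random {
    .retries = 1;
    # weight squid heavily so nearly all misses are fetched from its warm cache
    { .backend = squid_1; .weight = 250; }
    # the real origin still sees the odd request
    { .backend = backend_1; .weight = 1; }
}

sub vcl_recv {
    set req.backend = prefer_squid;
}

Once the Varnish cache is warm, dropping the squid_1 entry finishes the
migration.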
Regards, Martin On 03/25/2011 11:55 AM, Jonathan Matthews wrote: > On 21 March 2011 15:08, Jonathan Matthews wrote: >> Hi all - >> >> I've got some long-running squid instances, mainly used for caching >> medium-sized binaries, which I'd like to replace with some varnish >> instances. The binaries are quite heavy to regenerate on the distant >> origin servers and there's a large number of them. Hence, I'd like to >> use the squid cache as a target to warm a (new, nearby) varnish >> instance instead of just pointing the varnish instance at the remote >> origin servers. >> >> The squid instances are running in proxy mode, and require (I >> *believe*) an HTTP CONNECT. I've looked around for people trying the >> same thing, but haven't come across any success stories. I'm >> perfectly prepared to be told that I simply have to reconfigure the >> squid instances in mixed proxy/origin-server mode, and that there's no >> way around it, but I thought I'd ask the list for guidance first ... >> >> Any thoughts? > Anyone? All opinions welcome ... :-) > From ronny.ostman at apberget.se Wed Mar 30 08:35:10 2011 From: ronny.ostman at apberget.se (Ronny =?UTF-8?B?w5ZzdG1hbg==?=) Date: Wed, 30 Mar 2011 08:35:10 +0200 Subject: Using varnish to cache remote content Message-ID: Hello! This might be a stupid question since I've searched alot and haven't really found the answer.. Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great for all content that my backend provides. The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since the request may not go through varnish.. i guess? To further illustrate my question, here's an example of how it might look: GET mydomain.com - Domain: mydomain.com GET main.css - Domain: mydomain.com GET hello.jpg - Domain: static.mydomain.com GET anypicture.png - Domain: flickr.com GET foo.js - Domain: foo.com In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a way to have varnish do this "dynamically". The requests doesnt seem to go through my varnish according to varnishlog and this makes it hard to set backend depending on host. Am I trying to accomplish something impossible here? Thanks! Regards, Ronny -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Wed Mar 30 08:52:03 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 29 Mar 2011 23:52:03 -0700 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: Yes, what you describe is impossible. Since all of these requests are handled on the client/browser side, you can't effect them. The only way would be to either A) configure the user to proxy through your varnish for specific domains (ugly), or B) filter the user's DNS and replace flickr.com etc with the IP of your varnish cache (even uglier). Neither is possible with general internet traffic. Not a Varnish thing, but in theory you could modify your backends to rewrite external URLs that they emit as http://your_varnish.com/flickr.com/real_file(instead of http://flickr.com/real_file) and then have Varnish perform cache magick on that rewritten URL. But start talking SSL and it all goes sideways. And this assumes you wanted only to proxy external URLs that your site is emitting. 
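A rough sketch of that rewrite, where the backend name, the /flickr.com/
prefix and the port are nothing more than placeholders:

backend flickr {
    # stand-in; the host must resolve to a single address, or use an IP here
    .host = "flickr.com";
    .port = "80";
}

sub vcl_recv {
    # the application emitted /flickr.com/real_file instead of
    # http://flickr.com/real_file
    if (req.url ~ "^/flickr\.com/") {
        set req.backend = flickr;
        set req.http.host = "flickr.com";
        set req.url = regsub(req.url, "^/flickr\.com", "");
    }
}

Whether any of it is actually cacheable still depends on what the remote
site sends back.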
If there's a glimmer of possibility, it's a really ugly glimmer. ;-) -- kb On Tue, Mar 29, 2011 at 23:35, Ronny ?stman wrote: > Hello! > > This might be a stupid question since I've searched alot and haven't really > found the answer.. > > Anyway, I have a varnish set up caching requests to my backend the way I > want it to and it works great > for all content that my backend provides. > > The problem I am having is caching content from remote sources.. I'm not > sure if this is really possible since > the request may not go through varnish.. i guess? > > To further illustrate my question, here's an example of how it might look: > > GET mydomain.com - Domain: mydomain.com > GET main.css - Domain: mydomain.com > GET hello.jpg - Domain: static.mydomain.com > GET anypicture.png - Domain: flickr.com > GET foo.js - Domain: foo.com > > In this example, is it possible to have my varnish cache those "remote" > requests as well? I can set up backends > for those remote domains and force varnish to use them instead of my own > backend but I can't seem to find a > way to have varnish do this "dynamically". The requests doesnt seem to go > through my varnish according to > varnishlog and this makes it hard to set backend depending on host. > > Am I trying to accomplish something impossible here? > > Thanks! > > Regards, > Ronny > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Wed Mar 30 08:51:43 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 30 Mar 2011 08:51:43 +0200 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: You cannot do that with varnish, or with anything else :) Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronny ?stman Sent: Wednesday, March 30, 2011 8:35 AM To: varnish-misc at varnish-cache.org Subject: Using varnish to cache remote content Hello! This might be a stupid question since I've searched alot and haven't really found the answer.. Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great for all content that my backend provides. The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since the request may not go through varnish.. i guess? To further illustrate my question, here's an example of how it might look: GET mydomain.com - Domain: mydomain.com GET main.css - Domain: mydomain.com GET hello.jpg - Domain: static.mydomain.com GET anypicture.png - Domain: flickr.com GET foo.js - Domain: foo.com In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a way to have varnish do this "dynamically". The requests doesnt seem to go through my varnish according to varnishlog and this makes it hard to set backend depending on host. Am I trying to accomplish something impossible here? Thanks! Regards, Ronny -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ronny.ostman at apberget.se Wed Mar 30 09:16:06 2011 From: ronny.ostman at apberget.se (Ronny =?UTF-8?B?w5ZzdG1hbg==?=) Date: Wed, 30 Mar 2011 09:16:06 +0200 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: Thank's for your answers! :) I figured it was more or less impossible. > >If there's a glimmer of possibility, it's a really ugly glimmer. ;-) Luckily ugly workarounds is my speciality! ;) > > But I guess a pretty resonable solution for caching all the content I want from the domains that I control on the same varnish installation is to point e.g. www.mydomain and static.mydomain to my varnish server and "route" traffic using several backends? Regards, Ronny > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 30 09:23:13 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 30 Mar 2011 09:23:13 +0200 Subject: varnish as traffic director In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 5:01 AM, Jonathan Leibiusky wrote: > Thanks! > If I have 100 of different rules, I would have a very big if block, right? > Is this a common use case for varnish? Yes. It's quite common to have a lot of logic. Don't worry about it, VCL is executed at light speed. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From geoff at uplex.de Wed Mar 30 10:06:06 2011 From: geoff at uplex.de (Geoff Simmons) Date: Wed, 30 Mar 2011 10:06:06 +0200 Subject: How to collect lines from varnishncsa only from a specific domain? In-Reply-To: References: Message-ID: <4D92E46E.8010101@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/29/11 10:09 PM, Kurt Kraut wrote: > > I'm trying to use varnishncsa -c -I to collect the output of varnishncsa > concerning a specific domain (e.g.: domain.com). I've attempted the > following commands: > > varnishncsa -c -I *domain.com* > varnishncsa -c -I /*domain.com/ > > And none of them worked. But the following command works: > > varnishncsa -c | grep domain.com The -I flag for varnishncsa and other tools does regex matching, not globbing with the '*' wildcard. But if it's enough for you to just match the string domain.com, you don't need anything else: varnishncsa -c -I domain.com Just like for grep. 
Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNkuRuAAoJEOUwvh9pJNURuPYQAI51SjWQXEzicJtgjpMW5R7c rk88lflgWqCyo3390qvd3eA2YAW7JIPSsOcFhLwFSb1/OxLHqmn5lIy2y/gTiJbV kd9yoVMomwWyH0vAr3F1L3iW7HwTLMoz/F9nXNBYRYhbAUaAs9ESrEIiPqD3SYP8 Z1Py1DwtEhiVfJ8X3yYEfVEef9B60Zn1Y3czrQ75m+i9mvljMWxCa2kL/IgKVTe+ MlwQA4wPni+qxTJoC5wwZNLh6FHRtl2F6OQUJrm0bBjt97tw8Ul+1DLFUjHCY6vl lPTXQFflqDwaBo4kiPHRgpKHvmFpcwNZokYeZ9bQgB8ds+fJCx4DBI/t40pUCRiB gJT5AKfFCiFyu/HdC4vGMqXrt22wn9yriHUhTI8qnbHRj939wFkoBix3XrjjVKSW 4Ma1kaET3tTJILtz4xhAVhQPOb4HEuoY5otcTrUS+Ix5aEQwsjFsEVkzS7mh8RLc OtNe8t0JEmLm7SdgFSit7RO/i0dPRyL4ih8duB1PIKeJxys8nSIQvODzban7k9Oa rrQVsplPLmHjngUBoDxNkyc1yo7s6OjsVO7seVxjvgOSoWdmgnOEG3oVvF9uZ2Jd tFPl7gfBM6eHR8owLgUscuaQGVbRr00om5Y3RSX/MGqPZwQR8Dy/X6YJI0DIAZfQ Mj2/uaZ7Uw4lHJFRxhkL =Lbj3 -----END PGP SIGNATURE----- From diego.roccia at subito.it Wed Mar 30 10:51:24 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Wed, 30 Mar 2011 10:51:24 +0200 Subject: Varnish stuck on most served content Message-ID: <4D92EF0C.40204@subito.it> Hi Guys, This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: +---------+ +---------+ +---------+ +---------+ | VARNISH | | VARNISH | | VARNISH | | VARNISH | +---------+ +---------+ +---------+ +---------+ | | | | +------------+------------+------------+ | | +------+-+ +--+-----+ | APACHE | | APACHE | +--------+ +--------+ Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. It's really weird, because css should be always cached. I'm running Centos 5.5 64bit and here's my varnish startup parameters: DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T 0.0.0.0:6082 \ -t 604800 \ -u varnish -g varnish \ -s malloc,54G \ -p thread_pool_add_delay=2 \ -p thread_pools=16 \ -p thread_pool_min=50 \ -p thread_pool_max=4000 \ -p listen_depth=4096 \ -p lru_interval=600 \ -hclassic,500009 \ -p log_hashstring=off \ -p shm_workspace=16384 \ -p ping_interval=2 \ -p default_grace=3600 \ -p pipe_timeout=10 \ -p sess_timeout=6 \ -p send_timeout=10" In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. Hope someone can help me Thanks in advance Diego -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: stats.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: default.vcl URL: From traian.bratucu at eea.europa.eu Wed Mar 30 10:59:08 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 30 Mar 2011 10:59:08 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D92EF0C.40204@subito.it> References: <4D92EF0C.40204@subito.it> Message-ID: Not sure what you mean by "freeze", but what you need to do is debug the request with "varnishlog". You need to see what exactly happens when the GET request is received by varnish and whether it is served from cache or varnish tries to fetch from the backends. Try " varnishlog -o | grep -A 50 'your.css' " (or something like that) on one of the varnish servers. Traian -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Diego Roccia Sent: Wednesday, March 30, 2011 10:51 AM To: varnish-misc at varnish-cache.org Subject: Varnish stuck on most served content Hi Guys, This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: +---------+ +---------+ +---------+ +---------+ | VARNISH | | VARNISH | | VARNISH | | VARNISH | +---------+ +---------+ +---------+ +---------+ | | | | +------------+------------+------------+ | | +------+-+ +--+-----+ | APACHE | | APACHE | +--------+ +--------+ Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. It's really weird, because css should be always cached. I'm running Centos 5.5 64bit and here's my varnish startup parameters: DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T 0.0.0.0:6082 \ -t 604800 \ -u varnish -g varnish \ -s malloc,54G \ -p thread_pool_add_delay=2 \ -p thread_pools=16 \ -p thread_pool_min=50 \ -p thread_pool_max=4000 \ -p listen_depth=4096 \ -p lru_interval=600 \ -hclassic,500009 \ -p log_hashstring=off \ -p shm_workspace=16384 \ -p ping_interval=2 \ -p default_grace=3600 \ -p pipe_timeout=10 \ -p sess_timeout=6 \ -p send_timeout=10" In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. Hope someone can help me Thanks in advance Diego From diego.roccia at subito.it Wed Mar 30 11:10:35 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Wed, 30 Mar 2011 11:10:35 +0200 Subject: Varnish stuck on most served content In-Reply-To: References: <4D92EF0C.40204@subito.it> Message-ID: <4D92F38B.6090900@subito.it> Hi Traian, Thanks for your interest. The problem is that it's a random issue. I noticed it as I'm using some commercial tools (keynote and gomez) to monitor website performances and I notice some out of average point in the scatter time graph. Experiencing it locally is really hard. 
On 03/30/2011 10:59 AM, Traian Bratucu wrote: > Not sure what you mean by "freeze", but what you need to do is debug the request with "varnishlog". > You need to see what exactly happens when the GET request is received by varnish and whether it is served from cache or varnish tries to fetch from the backends. > > Try " varnishlog -o | grep -A 50 'your.css' " (or something like that) on one of the varnish servers. > > Traian > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Diego Roccia > Sent: Wednesday, March 30, 2011 10:51 AM > To: varnish-misc at varnish-cache.org > Subject: Varnish stuck on most served content > > Hi Guys, > This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: > > +---------+ +---------+ +---------+ +---------+ > | VARNISH | | VARNISH | | VARNISH | | VARNISH | > +---------+ +---------+ +---------+ +---------+ > | | | | > +------------+------------+------------+ > | | > +------+-+ +--+-----+ > | APACHE | | APACHE | > +--------+ +--------+ > > Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. > > It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. > > It's really weird, because css should be always cached. > > I'm running Centos 5.5 64bit and here's my varnish startup parameters: > > DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ > -f ${VARNISH_VCL_CONF} \ > -T 0.0.0.0:6082 \ > -t 604800 \ > -u varnish -g varnish \ > -s malloc,54G \ > -p thread_pool_add_delay=2 \ > -p thread_pools=16 \ > -p thread_pool_min=50 \ > -p thread_pool_max=4000 \ > -p listen_depth=4096 \ > -p lru_interval=600 \ > -hclassic,500009 \ > -p log_hashstring=off \ > -p shm_workspace=16384 \ > -p ping_interval=2 \ > -p default_grace=3600 \ > -p pipe_timeout=10 \ > -p sess_timeout=6 \ > -p send_timeout=10" > > In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? > > In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. > > Hope someone can help me > Thanks in advance > Diego > > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From pom at dmsp.de Wed Mar 30 11:59:03 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Wed, 30 Mar 2011 11:59:03 +0200 Subject: How to collect lines from varnishncsa only from a specific domain? In-Reply-To: <4D92E46E.8010101@uplex.de> References: <4D92E46E.8010101@uplex.de> Message-ID: <4D92FEE7.2030305@dmsp.de> Am 30.03.2011 10:06, schrieb Geoff Simmons: > On 03/29/11 10:09 PM, Kurt Kraut wrote: >> I'm trying to use varnishncsa -c -I to collect the output of varnishncsa >> concerning a specific domain (e.g.: domain.com). 
I've attempted the >> following commands: >> >> varnishncsa -c -I *domain.com* >> varnishncsa -c -I /*domain.com/ >> >> And none of them worked. But the following command works: >> >> varnishncsa -c | grep domain.com > The -I flag for varnishncsa and other tools does regex matching, not > globbing with the '*' wildcard. > > But if it's enough for you to just match the string domain.com, you > don't need anything else: > > varnishncsa -c -I domain.com > > Just like for grep. > > > Best, > Geoff I have to support Kurt ;-) Have the same problem (still moved it to the stack of 'unsolved stuff' so far...) varnishncsa -I is either not or at least working strange... Example: aurora ~ # varnishncsa -c XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:16 +0200] "GET http://www.annuna.net/ HTTP/1.0" 200 791 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:17 +0200] "GET http://www.annuna.net/StyleSheet.css HTTP/1.0" 200 2876 "http://www.annuna.net/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:17 +0200] "GET http://www.annuna.net/img/Logo_Annuna_850x680_weiss.jpg HTTP/1.0" 200 31986 "http://www.annuna.net/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" aurora ~ # varnishncsa -c -I annuna or aurora ~ # varnishncsa -c -I "^.*annuna.*$" From my understanding it should at least match any line containing the character string "annuna"? But it doesn't... Am I doint it wrong? ^^ Wondering... Stefan -- Dipl.-Inform. Stefan Pommerening Informatik-B?ro: IT-Dienste & Projekte, Consulting & Coaching http://www.dmsp.de From stewsnooze at gmail.com Wed Mar 30 08:40:49 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Wed, 30 Mar 2011 07:40:49 +0100 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: <191AFFC3-B260-40FB-9AD7-CABBBF9F4E8B@gmail.com> Hi, You can only cache items where the DNS record for those sites points at the server/infrastructure where you are running Varnish. You could do something crazy like have flickr.mydomain.com referenced in your HTML pages which is configured in Varnish to use flickr.com as a backend. Personally I think this is a bit strange but it is possible. You need to think about why you are caching external stuff in Varnish and whether you are allowed to? Stew On 30 Mar 2011, at 07:35, Ronny ?stman wrote: > Hello! > > This might be a stupid question since I've searched alot and haven't really found the answer.. > > Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great > for all content that my backend provides. > > The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since > the request may not go through varnish.. i guess? > > To further illustrate my question, here's an example of how it might look: > > GET mydomain.com - Domain: mydomain.com > GET main.css - Domain: mydomain.com > GET hello.jpg - Domain: static.mydomain.com > GET anypicture.png - Domain: flickr.com > GET foo.js - Domain: foo.com > > In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends > for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a > way to have varnish do this "dynamically". 
The requests doesnt seem to go through my varnish according to > varnishlog and this makes it hard to set backend depending on host. > > Am I trying to accomplish something impossible here? > > Thanks! > > Regards, > Ronny > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From lampe at hauke-lampe.de Wed Mar 30 18:27:00 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Wed, 30 Mar 2011 18:27:00 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D92EF0C.40204@subito.it> References: <4D92EF0C.40204@subito.it> Message-ID: <4D9359D4.6080102@hauke-lampe.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 30.03.2011 10:51, Diego Roccia wrote: > Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, > running varnish 2.1.6 (Updated from 2.0.5 1 month ago). varnish 2.1.6 hasn't been released, yet, AFAIK. > It's happening often that they freeze serving contents. and I noticed a > common pattern: the content that get stuck is always one of the most > served, like a css or js file, or some component of the page layout, Do you run 2.1.4 or 2.1.5? Is the "freeze" a constant timeout, i.e. does it eventually deliver the content after the same period of time? There was a bug in 2.1.4 that could lead to the symptoms you describe. If the client sent an If-Modified-Since: header and the backend returned a 304 response, varnish would wait on the backend connection until "first_byte_timeout" elapsed. In that case, the following VCL code helps: sub vcl_pass { unset bereq.http.if-modified-since; unset bereq.http.if-none-match; } http://cfg.openchaos.org/varnish/vcl/common/bug_workaround_2.1.4_304.vcl See also this thread: http://www.gossamer-threads.com/lists/varnish/misc/17155#17155 Hauke. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iEYEARECAAYFAk2TWc8ACgkQKIgAG9lfHFOU1wCgkr0TwZZoJQz7CQ5vdCgryENP 4HIAn0W0qG2K63vnkHDNA1ZMRGElIE30 =BTfX -----END PGP SIGNATURE----- From diego.roccia at subito.it Thu Mar 31 10:33:02 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Thu, 31 Mar 2011 10:33:02 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D9359D4.6080102@hauke-lampe.de> References: <4D92EF0C.40204@subito.it> <4D9359D4.6080102@hauke-lampe.de> Message-ID: <4D943C3E.5070903@subito.it> On 03/30/2011 06:27 PM, Hauke Lampe wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > On 30.03.2011 10:51, Diego Roccia wrote: > >> Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, >> running varnish 2.1.6 (Updated from 2.0.5 1 month ago). > > varnish 2.1.6 hasn't been released, yet, AFAIK. Sorry, I meant varnish-2.1.5 (SVN 0843d7a), the version from official rpm repository > >> It's happening often that they freeze serving contents. and I noticed a >> common pattern: the content that get stuck is always one of the most >> served, like a css or js file, or some component of the page layout, > > Do you run 2.1.4 or 2.1.5? Is the "freeze" a constant timeout, i.e. does > it eventually deliver the content after the same period of time? Doesn't seems to be a constant time. the same varnish provides tens of elements per page, and sometimes it gets stuck on one of them. It's always a css or js. There are no rules in the vcl specific to these kind of files. 
so the only common pattern I see is that they're the files it has to serve most times. > There was a bug in 2.1.4 that could lead to the symptoms you describe. > If the client sent an If-Modified-Since: header and the backend returned > a 304 response, varnish would wait on the backend connection until > "first_byte_timeout" elapsed. > I don't think it's receiving the If-Modified-Since , as we're talking about website monitoring tools, and they are configured to start cache cleared every time. > In that case, the following VCL code helps: > > sub vcl_pass { > unset bereq.http.if-modified-since; > unset bereq.http.if-none-match; > } > http://cfg.openchaos.org/varnish/vcl/common/bug_workaround_2.1.4_304.vcl > > See also this thread: > http://www.gossamer-threads.com/lists/varnish/misc/17155#17155 > > > > Hauke. > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > > iEYEARECAAYFAk2TWc8ACgkQKIgAG9lfHFOU1wCgkr0TwZZoJQz7CQ5vdCgryENP > 4HIAn0W0qG2K63vnkHDNA1ZMRGElIE30 > =BTfX > -----END PGP SIGNATURE----- From mhettwer at team.mobile.de Thu Mar 31 11:15:06 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 10:15:06 +0100 Subject: Varnish stuck on most served content In-Reply-To: <4D92F38B.6090900@subito.it> Message-ID: Hi Diego, Please try to avoid top posting. On 30.03.11 11:10, "Diego Roccia" wrote: >Hi Traian, > Thanks for your interest. The problem is that it's a random issue. I >noticed it as I'm using some commercial tools (keynote and gomez) to >monitor website performances and I notice some out of average point in >the scatter time graph. Experiencing it locally is really hard. We are using gomez to let them monitor some of our important pages from remote. What we usually do if we see spikes is, to dig in and find out were the time is spent. In your example, if it's gomez, click in and check. Is it first byte time? DNS time? Content delivery time? With regards to how to debug that. I second the question to the list. My usual procedure in a setup of Apache-->Tomcat-->SomeBackends, I'll go and dig into the access logs of all components and try to figure out who is spending the time to deliver. However, with varnish in front of apaches, I usually don't have a logfile which tells me "varnish thinks it spend xx ms to deliver this request". I know that theres varnishncsa, but I dunno whether it logs away the processing time of a request (%D in Apache LogFormat IIRC). You might try and really use varnishlog to log away requests to js and css files. However, you might grow some really huge files there... Hard to parse them ;) >> >> I'm running Centos 5.5 64bit and here's my varnish startup parameters: >> >> DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >> -f ${VARNISH_VCL_CONF} \ >> -T 0.0.0.0:6082 \ >> -t 604800 \ >> -u varnish -g varnish \ >> -s malloc,54G \ >> -p thread_pool_add_delay=2 \ >> -p thread_pools=16 \ >> -p thread_pool_min=50 \ >> -p thread_pool_max=4000 \ >> -p listen_depth=4096 \ >> -p lru_interval=600 \ >> -hclassic,500009 \ >> -p log_hashstring=off \ >> -p shm_workspace=16384 \ >> -p ping_interval=2 \ >> -p default_grace=3600 \ >> -p pipe_timeout=10 \ >> -p sess_timeout=6 \ >> -p send_timeout=10" Hu. What are all those "-p" parameters? Looks like some heavy tweaking to me. Perhaps some varnish gurus might shime in, but to me tuning like that sounds like trouble. Unless you really know what you did there. I wouldn't (not without the documentation at hands). 
Cheers, Marian From geoff at uplex.de Thu Mar 31 11:40:24 2011 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 31 Mar 2011 11:40:24 +0200 Subject: Varnish stuck on most served content In-Reply-To: References: Message-ID: <4D944C08.7040804@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/31/11 11:15 AM, Hettwer, Marian wrote: >>> >>> I'm running Centos 5.5 64bit and here's my varnish startup parameters: >>> >>> DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >>> -f ${VARNISH_VCL_CONF} \ >>> -T 0.0.0.0:6082 \ >>> -t 604800 \ >>> -u varnish -g varnish \ >>> -s malloc,54G \ >>> -p thread_pool_add_delay=2 \ >>> -p thread_pools=16 \ >>> -p thread_pool_min=50 \ >>> -p thread_pool_max=4000 \ >>> -p listen_depth=4096 \ >>> -p lru_interval=600 \ >>> -hclassic,500009 \ >>> -p log_hashstring=off \ >>> -p shm_workspace=16384 \ >>> -p ping_interval=2 \ >>> -p default_grace=3600 \ >>> -p pipe_timeout=10 \ >>> -p sess_timeout=6 \ >>> -p send_timeout=10" > > Hu. What are all those "-p" parameters? Looks like some heavy tweaking to > me. > Perhaps some varnish gurus might shime in, but to me tuning like that > sounds like trouble. > Unless you really know what you did there. > > I wouldn't (not without the documentation at hands). Um. Many of those -p's are roughly in the ranges recommended on the Wiki performance page, and on Kristian Lyngstol's blog. http://www.varnish-cache.org/trac/wiki/Performance http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/ Perhaps one of the settings is causing a problem, but it isn't wrong to be doing it all -- and it's quite necessary on a high-traffic site. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNlEwIAAoJEOUwvh9pJNUR0tYP/2M9LpET5mj3OQiMu2Bym1JD iTn2eckasyQRwPzXvDhCZNFRHJDV8aO2wUZ3XqMFsty05FKgPUhoLgJZ9wAoaBXZ oVVr34G4b33SFlVAxfvskHrEp83F0cY5Gb6W/JP2Oj/SzpHEM3elT+8tTFXjgngB F463EiGcikdSdQ5PaMGfTva9JZP6QI+K0IYW4walCPSsz829yQ6I6e5eIDCECiFq BhJMcXdvATWOHg5LfcRUOlcQJJFPl0mzT/Y2zq/hgdImjZ5NLU87xhjFD8twKOVZ Rju8u2Cz6Pl9HHNyVTV5W2fNmIE3J1o972JseHz4wFNoEJQtzTtyEGADE2u2bXH9 Blbor4J1bmERUSyFvH9Brhe1+4Rs5IOtGFCGrEzpxtY+QiqCIkdxmCCl5/EhQlRl eJZMkN3eaXvGgrHHASxM7e2UoIFm0XrBJW5N01Bu6dA/EH6jLowwEmU6OeLkKUSF DLIgAeKt1ECrVU23b9zFfiZSQwMTKB7KumrJoeDrUtSuWVIWdz83thaD0MI6ucxD 62CIPkR7W5zDxSDQ0A/AnXrkZpe8sLP9Z/DgcHA8rSX39zqxJae44OnU56fU07zc 440P+GeT6j5MoKAa1gCxSDAVr7MnDB3B82Y8fZaUFWB1rT1tI/B7VB5dhVwFoEi2 ucD3QwucEs7bpLrKyiwo =3vMe -----END PGP SIGNATURE----- From dan at retrobadger.net Thu Mar 31 13:39:51 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 12:39:51 +0100 Subject: Learning C and VRT_SetHdr() Message-ID: <4D946807.8010900@retrobadger.net> I would like to do some more advanced functionality within our VCL file, and to do so need to use the inline-c capabilities of varnish. So, to start off with, I thought I would get my VCL file to set the headers, so I can test variables, and be sure it is working. But am getting a WSOD when I impliment my seemingly simple code. 
So my main questions are: * Are there any good docs on the VRT variables * Are there examples/tutorials on using C within the VCL (for beginners to the subject) * Is there something obvious I have overlooked when writing my code The snipper from my current code is: /sub detectmobile {/ / C{/ / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", vrt_magic_string_end);/ / }C/ /}/ /sub vcl_miss {/ / call detectmobile;/ / return(fetch);/ /}/ /sub vcl_pipe {/ / call detectmobile;/ / return(pipe);/ /}/ /sub vcl_pass {/ / call detectmobile;/ / return(pass);/ /}/ Thanks for your advice, Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Thu Mar 31 13:58:26 2011 From: straightflush at gmail.com (AD) Date: Thu, 31 Mar 2011 07:58:26 -0400 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D946807.8010900@retrobadger.net> References: <4D946807.8010900@retrobadger.net> Message-ID: use -C to display the default VCL , or just put in a command you want to do in C inside the vcl and then see how it looks when running -C -f against the config. On Thu, Mar 31, 2011 at 7:39 AM, Dan wrote: > I would like to do some more advanced functionality within our VCL file, > and to do so need to use the inline-c capabilities of varnish. So, to start > off with, I thought I would get my VCL file to set the headers, so I can > test variables, and be sure it is working. But am getting a WSOD when I > impliment my seemingly simple code. > > > So my main questions are: > * Are there any good docs on the VRT variables > * Are there examples/tutorials on using C within the VCL (for beginners to > the subject) > * Is there something obvious I have overlooked when writing my code > > > The snipper from my current code is: > *sub detectmobile {* > * C{* > * VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end);* > * }C* > *}* > *sub vcl_miss {* > * call detectmobile;* > * return(fetch);* > *}* > *sub vcl_pipe {* > * call detectmobile;* > * return(pipe);* > *}* > *sub vcl_pass {* > * call detectmobile;* > * return(pass);* > *}* > > > Thanks for your advice, > Dan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cosimo at streppone.it Thu Mar 31 13:58:54 2011 From: cosimo at streppone.it (Cosimo Streppone) Date: Thu, 31 Mar 2011 22:58:54 +1100 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D946807.8010900@retrobadger.net> References: <4D946807.8010900@retrobadger.net> Message-ID: On Thu, 31 Mar 2011 22:39:51 +1100, Dan wrote: > The snipper from my current code is: Is it correct to do this in all vcl_miss, pipe and pass? What about vcl_hit then? I would have expected this to happen in vcl_deliver() or vcl_fetch() if you want your backends to see the header you're setting. Anyway... > /sub detectmobile {/ > / C{/ > / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end);/ > / }C/ I believe you have a problem in the "\020" bit. That is octal notation. 
"X-Whatever:" is 11 bytes, including the ':', so you need to write: "\013X-Whatever:" Have fun, -- Cosimo From ttischler at homeaway.com Wed Mar 30 20:46:43 2011 From: ttischler at homeaway.com (Tim Tischler) Date: Wed, 30 Mar 2011 13:46:43 -0500 Subject: varnish as traffic director In-Reply-To: Message-ID: We first started using varnish for caching during a superbowl advertisement, and then when we no longer needed the caching, we keep using it as our load balancer. We're now using it as a A/B testing system between static builds with a number of different rules. We've written a ruby DSL to generate the common rules and inject the the GUID hashes that uniquely identify the A vs. the B builds. We are also routing path prefixes to various additional applications. I've been extremely happy with the speed, the stability, and the flexibility of varnish as a load balancer/content switch, even without caching. -T On 3/30/11 2:23 AM, "Per Buer" wrote: > On Wed, Mar 30, 2011 at 5:01 AM, Jonathan Leibiusky > wrote: >> Thanks! >> If I have 100 of different rules, I would have a very big if block, right? >> Is this a common use case for varnish? > > Yes. It's quite common to have a lot of logic. Don't worry about it, > VCL is executed at light speed. From mhettwer at team.mobile.de Thu Mar 31 14:48:42 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 13:48:42 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: Hej there, On 30.03.11 04:52, "AD" wrote: >sub vcl_recv { > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > elseif (req.url ~ "^/items") { > set req.backend = items-backend; > } > >} By the way, would it also be okay to write it like that? sub vcl_recv { set req.backend = catchall-backend; if (req.url ~ "^/search") { set req.backend = search-backend; } if (req.url ~ "^/items") { set req.backend = items-backend; } } Obviously with the addition of the catchall-backend. Cheers, Marian From rtshilston at gmail.com Thu Mar 31 14:53:49 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Thu, 31 Mar 2011 13:53:49 +0100 Subject: varnish as traffic director In-Reply-To: References: Message-ID: > > On 30.03.11 04:52, "AD" wrote: > >> sub vcl_recv { >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> elseif (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >> } > > By the way, would it also be okay to write it like that? > > sub vcl_recv { > > set req.backend = catchall-backend; > > > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > if (req.url ~ "^/items") { > set req.backend = items-backend; > } > > } Logically it's ok. But, it's probably slightly better in terms of efficiency to use an elseif pattern. This is you'll do the first pattern match (/search) on every request, and then you'll also do the pattern match for items, even if you'd already matched to /search. Two pattern matches rather than one is undesirable, and even more so if you ended up having lots and lots of matches to do. Rob From ionathan at gmail.com Thu Mar 31 14:59:33 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Thu, 31 Mar 2011 09:59:33 -0300 Subject: varnish as traffic director In-Reply-To: References: Message-ID: Thanks! It is great to know about real life implementations. Do you have any good way to test rules in your dev env? Is there any benchmark on varnish vs. nginx in regard of load balancing? 
On 3/31/11, Hettwer, Marian wrote: > Hej there, > > > > > On 30.03.11 04:52, "AD" wrote: > >>sub vcl_recv { >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> elseif (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >>} > > By the way, would it also be okay to write it like that? > > sub vcl_recv { > > set req.backend = catchall-backend; > > > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > if (req.url ~ "^/items") { > set req.backend = items-backend; > } > > } > > > Obviously with the addition of the catchall-backend. > > Cheers, > Marian > > From mhettwer at team.mobile.de Thu Mar 31 15:06:09 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 14:06:09 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: On 31.03.11 14:53, "Robert Shilston" wrote: >> >> On 30.03.11 04:52, "AD" wrote: >> >>> sub vcl_recv { >>> if (req.url ~ "^/search") { >>> set req.backend = search-backend; >>> } >>> elseif (req.url ~ "^/items") { >>> set req.backend = items-backend; >>> } >>> >>> } >> >> By the way, would it also be okay to write it like that? >> >> sub vcl_recv { >> >> set req.backend = catchall-backend; >> >> >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> if (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >> } > > >Logically it's ok. But, it's probably slightly better in terms of >efficiency to use an elseif pattern. This is you'll do the first pattern >match (/search) on every request, and then you'll also do the pattern >match for items, even if you'd already matched to /search. Two pattern >matches rather than one is undesirable, and even more so if you ended up >having lots and lots of matches to do. Understood! Thanks for your explanation :-) Cheers, Marian From mhettwer at team.mobile.de Thu Mar 31 15:09:22 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 14:09:22 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: On 31.03.11 14:59, "Jonathan Leibiusky" wrote: >Thanks! It is great to know about real life implementations. >Do you have any good way to test rules in your dev env? >Is there any benchmark on varnish vs. nginx in regard of load balancing? Some Real-Life numbers. We have 4 varnishes deployed in front of a big german classifieds site. Each varnish is doing 1200 requests/second and according to munin, each machine is nearly idle. (cpu load at 4% out of 800%). Hardware is HP blades with 8 cores and 8 gig ram. I wouldn't bother to try out nginx. If nginx is in the same league like varnish, I probably couldn't spot a difference anyway ;-) Besides, I'm really happy with varnish. Sorry, no real-life infos about nginx here... Regards, Marian From dan at retrobadger.net Thu Mar 31 15:25:13 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 14:25:13 +0100 Subject: Learning C and VRT_SetHdr() In-Reply-To: References: <4D946807.8010900@retrobadger.net> Message-ID: <4D9480B9.4060101@retrobadger.net> On 31/03/11 12:58, Cosimo Streppone wrote: > On Thu, 31 Mar 2011 22:39:51 +1100, Dan wrote: > >> The snipper from my current code is: > > Is it correct to do this in all > vcl_miss, pipe and pass? > What about vcl_hit then? > > I would have expected this to happen in vcl_deliver() > or vcl_fetch() if you want your backends to see > the header you're setting. > > Anyway... 
> >> /sub detectmobile {/ >> / C{/ >> / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", >> vrt_magic_string_end);/ >> / }C/ > > I believe you have a problem in the "\020" bit. > That is octal notation. > > "X-Whatever:" is 11 bytes, including the ':', > so you need to write: > > "\013X-Whatever:" > > Have fun, > Sadly no luck with that, I have ammended my code as recommended. Varnish is still able to restart without errors, but WSOD on page load. My custom function is now something: sub detectmobile { C{ VRT_SetHdr(sp, HDR_BEREQ, "\013X-Varnish-TeraWurfl:", "no1", vrt_magic_string_end); }C } And the only occurance of 'call detectmobile;' is in: sub vcl_deliver {} Are there any libraries required for the VRT scripts to work? Do I need to alter the /etc/varnish/default file for C to work in varnish? From dan at retrobadger.net Thu Mar 31 15:28:21 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 14:28:21 +0100 Subject: Learning C and VRT_SetHdr() In-Reply-To: References: <4D946807.8010900@retrobadger.net> Message-ID: <4D948175.3000503@retrobadger.net> On 31/03/11 12:58, AD wrote: > use -C to display the default VCL , or just put in a command you want > to do in C inside the vcl and then see how it looks when running -C -f > against the config. > > > On Thu, Mar 31, 2011 at 7:39 AM, Dan > wrote: > > I would like to do some more advanced functionality within our VCL > file, and to do so need to use the inline-c capabilities of > varnish. So, to start off with, I thought I would get my VCL file > to set the headers, so I can test variables, and be sure it is > working. But am getting a WSOD when I impliment my seemingly > simple code. > > > So my main questions are: > * Are there any good docs on the VRT variables > * Are there examples/tutorials on using C within the VCL (for > beginners to the subject) > * Is there something obvious I have overlooked when writing my code > > > The snipper from my current code is: > /sub detectmobile {/ > / C{/ > / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end);/ > / }C/ > /}/ > /sub vcl_miss {/ > / call detectmobile;/ > / return(fetch);/ > /}/ > /sub vcl_pipe {/ > / call detectmobile;/ > / return(pipe);/ > /}/ > /sub vcl_pass {/ > / call detectmobile;/ > / return(pass);/ > /}/ > > > Thanks for your advice, > Dan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > Sorry, I am confused, where would I put -C, in my /etc/default/varnish file? Is this required to use inline-c within my vcl file? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Thu Mar 31 20:09:24 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Thu, 31 Mar 2011 11:09:24 -0700 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D9480B9.4060101@retrobadger.net> References: <4D946807.8010900@retrobadger.net> <4D9480B9.4060101@retrobadger.net> Message-ID: On Thu, Mar 31, 2011 at 06:25, Dan wrote: > Sadly no luck with that, I have ammended my code as recommended. Varnish > is still able to restart without errors, but WSOD on page load. My custom > function is now something: > The length of your header is 20 characters including the colon. 013 is the length (in octal) of the X-Whatever: example provided to explain this to you, it is not octal for 20. Replace 013 with 024 to avoid segfaults. 
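Applied to the snippet above, that gives (unchanged apart from the corrected length prefix):

    sub vcl_miss {
        C{
            /* "X-Varnish-TeraWurfl:" is 20 characters including the colon,
               and 20 decimal is 024 in octal, hence the \024 prefix. */
            VRT_SetHdr(sp, HDR_BEREQ, "\024X-Varnish-TeraWurfl:", "no1",
                vrt_magic_string_end);
        }C
        return(fetch);
    }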
There are docs covering this on the website, BTW. What on earth is a WSOD? -- kb > > sub detectmobile { > C{ > VRT_SetHdr(sp, HDR_BEREQ, "\013X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end); > }C > } > > And the only occurance of 'call detectmobile;' is in: > sub vcl_deliver {} > > Are there any libraries required for the VRT scripts to work? > > Do I need to alter the /etc/varnish/default file for C to work in varnish? > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thebog at gmail.com Thu Mar 31 21:46:33 2011 From: thebog at gmail.com (thebog) Date: Thu, 31 Mar 2011 21:46:33 +0200 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D948175.3000503@retrobadger.net> References: <4D946807.8010900@retrobadger.net> <4D948175.3000503@retrobadger.net> Message-ID: I think he meant -C in the commandline. Not inside the file. YS Anders Berg On Thu, Mar 31, 2011 at 3:28 PM, Dan wrote: > On 31/03/11 12:58, AD wrote: > > use -C to display the default VCL , or just put in a command you want to do > in C inside the vcl and then see how it looks when running -C -f against the > config. > > On Thu, Mar 31, 2011 at 7:39 AM, Dan wrote: >> >> I would like to do some more advanced functionality within our VCL file, >> and to do so need to use the inline-c capabilities of varnish.? So, to start >> off with, I thought I would get my VCL file to set the headers, so I can >> test variables, and be sure it is working.? But am getting a WSOD when I >> impliment my seemingly simple code. >> >> >> So my main questions are: >> * Are there any good docs on the VRT variables >> * Are there examples/tutorials on using C within the VCL (for beginners to >> the subject) >> * Is there something obvious I have overlooked when writing my code >> >> >> The snipper from my current code is: >> sub detectmobile { >> ? C{ >> ??? VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", >> vrt_magic_string_end); >> ? }C >> } >> sub vcl_miss { >> ? call detectmobile; >> ? return(fetch); >> } >> sub vcl_pipe { >> ? call detectmobile; >> ? return(pipe); >> } >> sub vcl_pass { >> ? call detectmobile; >> ? return(pass); >> } >> >> >> Thanks for your advice, >> Dan >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > Sorry, I am confused, where would I put -C, in my /etc/default/varnish > file?? Is this required to use inline-c within my vcl file? > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >
From l.barszcz at gadu-gadu.pl Wed Mar 2 08:13:30 2011 From: l.barszcz at gadu-gadu.pl (=?UTF-8?B?xYF1a2FzeiBCYXJzemN6IC8gR2FkdS1HYWR1?=) Date: Wed, 02 Mar 2011 08:13:30 +0100 Subject: caching of restarted requests possible?
In-Reply-To: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> Message-ID: <4D6DEE1A.80609@gadu-gadu.pl> Hi, On 02.03.2011 00:13, Hauke Lampe wrote: > I'd like to cache the 404 response for some time and immediately lookup the object under the next backend's hash key, so the update backend is only queried again after the TTL of the 404 object expires. > > I figure that even if varnish would cache the request before restart, it would probably not go through vcl_fetch next time. I tried setting a magic header in vcl_fetch and restart the request in vcl_deliver. varnish didn't like that and died with "INCOMPLETE AT: cnt_deliver(196)" Check out patch attached to ticket http://varnish-cache.org/trac/ticket/412 which changes behavior to what you need. > Is there any way to remember the previous request status on restart and use it for backend selection in vcl_recv? You can store data custom header in req, like req.http.X-My-State. req.* is accessible from every function in vcl, so you can store your state in there - it persists across restarts. -- ?ukasz Barszcz web architect / Pion Aplikacji Internetowych GG Network S.A ul. Kamionkowska 45 03-812 Warszawa tel.: +48 22 514 64 99 fax.: +48 22 514 64 98 gg:16210 Sp??ka zarejestrowana w S?dzie Rejonowym dla m. st. Warszawy, XIII Wydzia? Gospodarczy KRS pod numerem 0000264575, NIP 867-19-48-977. Kapita? zak?adowy: 1 758 461,10 z? - wp?acony w ca?o?ci. From thereallove at gmail.com Tue Mar 1 17:54:32 2011 From: thereallove at gmail.com (Dan Gherman) Date: Tue, 1 Mar 2011 11:54:32 -0500 Subject: Varnish, between Zeus and Apache Message-ID: Hello! I am confronting with this situation: I manage a Zeus load-balancer cluster who has Apache as a webserver on the nodes in the backend. When Zeus load-balances a connection to an Apache server or Apache-based application, the connection appears to originate from the Zeus machine.Zeus provide an Apache module to work around this. Zeus automatically inserts a special 'X-Cluster-Client-Ip' header into each request, which identifies the true source address of the request. Zeus' Apache module inspects this header and corrects Apache's calculation of the source address. This change is transparent to Apache, and any applications running on or behind Apache. Now the issue is when I have Varnish between Zeus and Apache. Varnish will always "see" the connections coming from the Zeus load-balancer. Is there a way to have a workaround, like that Apache module, so I can then send to Apache the true source address of the request? My error.log is flooded with the usual messages " Ignoring X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine 'node_ip' Thank you! --- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Wed Mar 2 09:09:58 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 2 Mar 2011 09:09:58 +0100 Subject: Varnish, between Zeus and Apache In-Reply-To: References: Message-ID: My guess is that Zeus may also set other headers that identify it to the apache module, and somehow get stripped by Varnish. You should check that out. Otherwise another solution may be placing Varnish in front of Zeus, if that does not affect your cluster setup. 
Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Dan Gherman Sent: Tuesday, March 01, 2011 5:55 PM To: varnish-misc at varnish-cache.org Subject: Varnish, between Zeus and Apache Hello! I am confronting with this situation: I manage a Zeus load-balancer cluster who has Apache as a webserver on the nodes in the backend. When Zeus load-balances a connection to an Apache server or Apache-based application, the connection appears to originate from the Zeus machine.Zeus provide an Apache module to work around this. Zeus automatically inserts a special 'X-Cluster-Client-Ip' header into each request, which identifies the true source address of the request. Zeus' Apache module inspects this header and corrects Apache's calculation of the source address. This change is transparent to Apache, and any applications running on or behind Apache. Now the issue is when I have Varnish between Zeus and Apache. Varnish will always "see" the connections coming from the Zeus load-balancer. Is there a way to have a workaround, like that Apache module, so I can then send to Apache the true source address of the request? My error.log is flooded with the usual messages " Ignoring X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine 'node_ip' Thank you! --- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From varnish at mm.quex.org Wed Mar 2 09:32:42 2011 From: varnish at mm.quex.org (Michael Alger) Date: Wed, 2 Mar 2011 16:32:42 +0800 Subject: Varnish, between Zeus and Apache In-Reply-To: References: Message-ID: <20110302083242.GA26131@grum.quex.org> On Tue, Mar 01, 2011 at 11:54:32AM -0500, Dan Gherman wrote: > I am confronting with this situation: I manage a Zeus load-balancer > cluster who has Apache as a webserver on the nodes in the backend. > When Zeus load-balances a connection to an Apache server or > Apache-based application, the connection appears to originate from the > Zeus machine.Zeus provide an Apache module to work around this. Zeus > automatically inserts a special 'X-Cluster-Client-Ip' header into each > request, which identifies the true source address of the request. > [...] > Is there a way to have a workaround, like that Apache module, so I can > then send to Apache the true source address of the request? My > error.log is flooded with the usual messages " Ignoring > X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine > 'node_ip' It sounds like Varnish is sending the headers it receives, but the Apache module only respects the X-Cluser-Client-IP header when it's received from a particular IP address(es). See if there's a way to configure the module to accept it from Varnish, i.e. as if Varnish is the load-balancer. There's probably some existing configuration which has the IP address of the Zeus load-balancer. From lampe at hauke-lampe.de Wed Mar 2 12:39:08 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Wed, 02 Mar 2011 12:39:08 +0100 Subject: caching of restarted requests possible? In-Reply-To: <4D6DEE1A.80609@gadu-gadu.pl> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> <4D6DEE1A.80609@gadu-gadu.pl> Message-ID: <4D6E2C5C.9020704@hauke-lampe.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02.03.2011 08:13, ?ukasz Barszcz / Gadu-Gadu wrote: > Check out patch attached to ticket > http://varnish-cache.org/trac/ticket/412 which changes behavior to what > you need. That looks promising, thanks. I'll give it a try. 
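A rough sketch of that req-header approach, for the two backends described at the start of this thread (the backend addresses and the X-Updates-Miss header name are made up here):

    backend updates { .host = "updates.example.com"; .port = "80"; }
    backend archive { .host = "archive.example.com"; .port = "80"; }

    sub vcl_recv {
        if (req.restarts == 0) {
            # First attempt: always ask the updates backend
            set req.backend = updates;
        } elsif (req.http.X-Updates-Miss) {
            # Restarted after a 404 from "updates": fall back to the archive
            set req.backend = archive;
        }
    }

    sub vcl_fetch {
        if (beresp.status == 404 && req.restarts == 0) {
            # req.* survives a restart, so note the miss here ...
            set req.http.X-Updates-Miss = "1";
            # ... and retry the request against the other backend
            return (restart);
        }
    }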
I hadn't thought of using vcl_hit to restart the request, either. That might solve the crash I encountered with restarting from within vcl_deliver. Hauke. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iEYEARECAAYFAk1uLFYACgkQKIgAG9lfHFMeEgCfTIfBp9FzLUjj7uPDrgkSfleo q9MAn2Efxy7kmRb3uMN560zjSsih2nob =mejt -----END PGP SIGNATURE----- From l at lrowe.co.uk Wed Mar 2 15:08:03 2011 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 2 Mar 2011 14:08:03 +0000 Subject: Practical VCL limits; giant URL->backend map In-Reply-To: <4D6C3920.6030708@archive.org> References: <4D6C3920.6030708@archive.org> Message-ID: On 1 March 2011 00:09, Gordon Mohr wrote: > The quite-possibly-nutty idea has occurred to me of auto-generating a VCL > that maps each of about 18 million artifacts (incoming URLs) to 1,2,or3 of > what are effectively 621 backend locations. (The mapping is essentially > arbitrary.) > > Essentially, it would be replacing a squid url_rewrite_program. > > Am I likely to hit any hard VCL implementation limits (in > depth-of-conditional-nesting, overall size, VCL compilation overhead, etc.) > if my VCL is ~100-200MB in size? > > Am I overlooking some other more simple way to have varnish consult an > arbitrary mapping (something similar to a squid url_rewrite_program)? > > Thanks for any warnings/ideas. With that many entries, I expect you'll find that configuration will be quite slow, as there are no index structures in VCL and it compiles down to simple procedural C code. I think you'd be better off taking the approach of integrating with an external database library for the lookup. This blog pos shows how to search for values in an xml file http://www.enrise.com/2011/02/mobile-device-detection-with-wurfl-and-varnish/ but I expect you'll see better performance using sqlite or bdb. Laurence From romain.ledisez at netensia.fr Wed Mar 2 17:46:22 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Wed, 02 Mar 2011 17:46:22 +0100 Subject: Varnish burns the CPU and eat the RAM Message-ID: <1299084382.2658.200.camel@romain.v.netensia.net> Hello all, I'm pretty new to Varnish. I'm deploying it because one of our customer is going to have a special event and the website is pretty slow (I'm working for an Internet hosting company). We are expecting more than 1000 requests per seconds. From what I read here and there, this should not be a problem for Varnish. My problem is that when Varnish is using cache ("deliver", as opposed to "pass"), the CPU consumption increases drasticaly, also the RAM. The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. With a simple test like this (robots.txt = 300 bytes) : ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt CPU consumption is fluctuating between 120% and 160%. Second point is that Varnish consumes all the memory. Trying to limit that, I made a tmpfs mountpoint of 3G : mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ But varnish continues to consume all the memory My configuration is attached to this mail. Varnish is launched like this : /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 120,120,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/mnt/varnish/varnish_storage.bin,100% -p thread_pools 4 I also tried to launch it with parameter "-h classic" It is installed on a Centos 5 up to date, with lastest RPMs provided by the varnish repository. If I put a return (pass) in vcl_fetch, everything is fine (except the backend server, of course). 
It makes me think, with my limited knowledge of Varnish, that the problem is in the delivery from cache.

Output of "varnishstat -1", when running ab, is attached.

Thanks for your help.

--
Romain LE DISEZ
-------------- next part --------------
backend customer1 {
    .host = "customer1.hoster.net";
    .port = "80";
}

sub vcl_recv {
    #
    # URL normalization
    #

    # Normalize URLs sent by curl -X and LWP
    if( req.url ~ "^http://" ) {
        set req.url = regsub(req.url, "http://[^/]*", "");
    }

    # Normalize the host (domain.tldx -> www.domain.tld)
    if( req.http.host == "customer1.com" || req.http.host ~ "^(www\.)?customer1\.net$" ) {
        set req.http.redir = "http://www.customer1.com" req.url;
        error 750 req.http.redir;
    }

    #
    # Per-site configuration
    #

    # Rules specific to Customer1
    if( req.http.host == "www.customer1.com" ) {
        set req.backend = customer1;

        # Remove the Cookie header sent by the browser
        remove req.http.Cookie;

        # OK for now (revisit when the mobile version is ready)
        remove req.http.Accept;
        remove req.http.Accept-Language;
        remove req.http.Accept-Charset;
        remove req.http.User-Agent;
    }

    #
    # Generic rules applied to all sites
    #

    # Grace period: keep serving content after it has expired from the cache
    # (e.g. while the backend request is redone or the backend is being brought back up)
    set req.grace = 3600s;

    # Normalize the Accept-Encoding header
    if( req.http.Accept-Encoding ) {
        if( req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|mp4|flv)$" ) {
            # Do not compress files that are already compressed
            remove req.http.Accept-Encoding;
        } elsif( req.http.Accept-Encoding ~ "gzip" ) {
            set req.http.Accept-Encoding = "gzip";
        } elsif( req.http.Accept-Encoding ~ "deflate" ) {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Remove unknown algorithms
            remove req.http.Accept-Encoding;
        }
    }

    # Purge the URL from the cache if it ends with the purge parameter
    if( req.url ~ "~purge$" ) {
        # Strip the "~purge" suffix, then purge the URL
        set req.url = regsub(req.url, "(.*)~purge$", "\1");
        purge_url( req.url );
    }
}

#
# Called after the response has been received from the backend
#
sub vcl_fetch {
    # Remove the Set-Cookie header sent by the server
    remove beresp.http.Set-Cookie;
}

#
# Called before content is delivered from the cache
#
sub vcl_deliver {
    remove resp.http.Via;
    remove resp.http.X-Varnish;
    remove resp.http.Server;
    remove resp.http.X-Powered-By;
    remove resp.http.P3P;
}

#
# "Catching" errors
#
sub vcl_error {
    if( obj.status == 750 ) {
        set obj.http.Location = obj.response;
        set obj.status = 301;
        return(deliver);
    }
}
-------------- next part --------------
client_conn 136529 117.80 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 136532 117.80 Client requests received cache_hit 136531 117.80 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 1 0.00 Cache misses backend_conn 1 0.00 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn.
unused fetch_head 0 0.00 Fetch head fetch_length 1 0.00 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 7720 . N struct sess_mem n_sess 18446744073709551606 . N struct sess n_object 1 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 481 . N struct objectcore n_objecthead 481 . N struct objecthead n_smf 3 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 1 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 480 . N worker threads n_wrk_create 480 0.41 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 81 0.07 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 166 0.14 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 6 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 136429 117.71 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 136534 117.80 Total Sessions s_req 136534 117.80 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 1 0.00 Total fetch s_hdrbytes 30885129 26648.08 Total header bytes s_bodybytes 41097938 35459.83 Total body bytes sess_closed 136538 117.81 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 4233973 3653.13 SHM records shm_writes 547445 472.34 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 46002 39.69 SHM MTX contention shm_cycles 1 0.00 SHM cycles through buffer sm_nreq 2 0.00 allocator requests sm_nobj 2 . outstanding allocations sm_balloc 8192 . bytes allocated sm_bfree 2574852096 . bytes free sma_nreq 0 0.00 SMA allocator requests sma_nobj 0 . SMA outstanding allocations sma_nbytes 0 . SMA outstanding bytes sma_balloc 0 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 0 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 0 . SMS bytes allocated sms_bfree 0 . SMS bytes freed backend_req 1 0.00 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 136442 117.72 HCB Lookups without lock hcb_lock 1 0.00 HCB Lookups with lock hcb_insert 1 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 1159 1.00 Client uptime backend_retry 0 0.00 Backend conn. 
retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From rdelsalle at gmail.com Wed Mar 2 18:17:07 2011 From: rdelsalle at gmail.com (Roch Delsalle) Date: Wed, 2 Mar 2011 18:17:07 +0100 Subject: Varnish & Multibrowser Message-ID: Hi, I would like to know how Varnish would behave if a web page is different depending on the browser accessing it. (eg. if a div is hidden for Internet Explorer users) Will it cache it randomly or is will it be able to notice the difference ? Thanks, -- D-Ro.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From ask at develooper.com Wed Mar 2 18:26:05 2011 From: ask at develooper.com (=?iso-8859-1?Q?Ask_Bj=F8rn_Hansen?=) Date: Wed, 2 Mar 2011 09:26:05 -0800 Subject: Varnish & Multibrowser In-Reply-To: References: Message-ID: <2FC7B8E9-2D6C-4B0E-BF8F-E32A8840F68B@develooper.com> On Mar 2, 2011, at 9:17, Roch Delsalle wrote: > I would like to know how Varnish would behave if a web page is different depending on the browser accessing it. > (eg. if a div is hidden for Internet Explorer users) > Will it cache it randomly or is will it be able to notice the difference ? You have to add a token to the cache key based on "was this MSIE". (Or have the developers do it with CSS or JS instead ...) - ask From scaunter at topscms.com Wed Mar 2 19:55:38 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 2 Mar 2011 13:55:38 -0500 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299084382.2658.200.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> Hello: You do not have return(lookup); in recv, not sure why, but that seems odd. Try it with that and see what happens with the test. We (have to) assume this is a 64bit OS. -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Romain LE DISEZ Sent: March-02-11 11:46 AM To: varnish-misc at varnish-cache.org Subject: Varnish burns the CPU and eat the RAM Hello all, I'm pretty new to Varnish. I'm deploying it because one of our customer is going to have a special event and the website is pretty slow (I'm working for an Internet hosting company). We are expecting more than 1000 requests per seconds. From what I read here and there, this should not be a problem for Varnish. My problem is that when Varnish is using cache ("deliver", as opposed to "pass"), the CPU consumption increases drasticaly, also the RAM. The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. With a simple test like this (robots.txt = 300 bytes) : ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt CPU consumption is fluctuating between 120% and 160%. Second point is that Varnish consumes all the memory. Trying to limit that, I made a tmpfs mountpoint of 3G : mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ But varnish continues to consume all the memory My configuration is attached to this mail. 
Varnish is launched like this : /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 120,120,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/mnt/varnish/varnish_storage.bin,100% -p thread_pools 4 I also tried to launch it with parameter "-h classic" It is installed on a Centos 5 up to date, with lastest RPMs provided by the varnish repository. If I put a return (pass) in vcl_fetch, everything is fine (except the backend server, of course). It makes me think, with my little knowledges of Varnish, that the problem is in the delivering from cache. Output of "varnishstat -1", when running ab, is attached. Thanks for your help. -- Romain LE DISEZ From nkinkade at creativecommons.org Thu Mar 3 01:40:34 2011 From: nkinkade at creativecommons.org (Nathan Kinkade) Date: Wed, 2 Mar 2011 19:40:34 -0500 Subject: return(lookup) in vcl_recv() to cache requests with cookies? Message-ID: This seems like one of those perennial questions where the required reply is RTFM or "review the list archives because it's been asked thousands of times", but for whatever reason, I can't find an answer to this aspect of caching requests with cookies. In the examples section of the 2.1 VCL reference (we're running 2.1.5) there is an example for how to force Varnish to cache requests that have cookies: http://www.varnish-cache.org/docs/2.1/reference/vcl.html#examples The instruction is to to return(lookup) in vcl_recv. However, I have found that that doesn't work for me. The only way I can seem to get Varnish 2.1.5 to cache a request with a cookie is to remove the Cookie: header in vcl_recv. Other docs I found also seem to indicate that return(lookup) should work. For example: http://www.varnish-cache.org/trac/wiki/VCLExampleCacheCookies#Cachingbasedonfileextensions There are also loads of other examples on the 'net that indicate that return(lookup) in vcl_recv should work. I though maybe it was cache control headers returned by the backend causing it not to cache, but I tried stripping all those out and it still didn't cache. Am I just missing something here, or is the documentation not fully complete? I don't necessarily want to strip cookies. I just want to cache, or not, based on some regular expression matching the Cookie: header sent by the client. Thanks, Nathan From cosimo at streppone.it Thu Mar 3 02:22:52 2011 From: cosimo at streppone.it (Cosimo Streppone) Date: Thu, 03 Mar 2011 12:22:52 +1100 Subject: Varnish & Multibrowser In-Reply-To: References: Message-ID: On Thu, 03 Mar 2011 04:17:07 +1100, Roch Delsalle wrote: > Hi, > > I would like to know how Varnish would behave if a web page is different > depending on the browser accessing it. Varnish doesn't know that unless you instruct it to. > (eg. if a div is hidden for Internet Explorer users) > Will it cache it randomly or is will it be able to notice the difference It will cache regardless of the content of the page, but according to: 1) vcl_hash(), which defaults to URL + cookies I believe 2) HTTP Vary header of the backend response So basically you have to tell Varnish what you want, and possibly stay consistent between VCL and how your designers make different pages for different browsers. 
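For instance, a rough sketch (not our actual config; X-UA-Class is just an illustrative header name) that folds the browser family into the hash, so you get one cached copy per class instead of one per User-Agent string:

sub vcl_recv {
    # Collapse User-Agent into a small number of classes.
    if (req.http.User-Agent ~ "MSIE") {
        set req.http.X-UA-Class = "msie";
    } else {
        set req.http.X-UA-Class = "other";
    }
}

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    set req.hash += req.http.X-UA-Class;
    return (hash);
}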
I tried to put together an example based on what we use: http://www.varnish-cache.org/trac/wiki/VCLExampleNormalizeUserAgent YMMV, -- Cosimo From jbooher at praxismicro.com Thu Mar 3 02:28:24 2011 From: jbooher at praxismicro.com (Jeff Booher) Date: Wed, 2 Mar 2011 20:28:24 -0500 Subject: Varnish Cache on multi account VPS Message-ID: I have 5 sites on the VPS. I want to only use Varnish on one. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nkinkade at creativecommons.org Thu Mar 3 02:59:10 2011 From: nkinkade at creativecommons.org (Nathan Kinkade) Date: Wed, 2 Mar 2011 20:59:10 -0500 Subject: Varnish Cache on multi account VPS In-Reply-To: References: Message-ID: This may not be the only, or even the best, way to go about this, but the thing that immediately occurs to me is to wrap your VCL rules for vcl_recv() in something like: sub vcl_recv { if ( req.http.host == "my.varnish.host" ) { [do something] } } Nathan On Wed, Mar 2, 2011 at 20:28, Jeff Booher wrote: > I have 5 sites on the VPS. I want to only use Varnish on one. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From andy at suburban-glory.com Thu Mar 3 09:28:24 2011 From: andy at suburban-glory.com (Andy Walpole) Date: Thu, 03 Mar 2011 08:28:24 +0000 Subject: 403 error message Message-ID: <4D6F5128.8020707@suburban-glory.com> Hi folks, I installed Varnish about a month ago but I've had a number of 403 error messages since (Service Unavailable Guru Meditation: XID: 583189221). It is only solved after a server reboot. I've no idea what is causing them. What is the best way of dissecting the problem? Is there an error file with Varnish? Regards, Andy -- ---------------------- Andy Walpole http://www.suburban-glory.com Work: 05601310400 (local rates) Mob: 07858756827 Skype: andy-walpole */Create and do what is new, through and through/* -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.ledisez at netensia.fr Thu Mar 3 10:13:45 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Thu, 03 Mar 2011 10:13:45 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> Message-ID: <1299143625.2628.41.camel@romain.v.netensia.net> Hello Stefan, thanks for your attention. Le mercredi 02 mars 2011 ? 13:55 -0500, Caunter, Stefan a ?crit : > You do not have > > return(lookup); > > in recv, not sure why, but that seems odd. Try it with that and see what happens with the test. As I understand, "return (lookup)" is automatically added because it is part of the default "vcl_recv", which is appended to the user vcl_recv. Nevertheless, I added it to the end of my "vcl_recv", it did not change the behaviour. > We (have to) assume this is a 64bit OS. You're right, it is a 64 bit CentOS. Greetings. -- Romain LE DISEZ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From martin.boer at bizztravel.nl Thu Mar 3 11:01:36 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Thu, 03 Mar 2011 11:01:36 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299084382.2658.200.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> Message-ID: <4D6F6700.5020604@bizztravel.nl> Hello Romain, Wat does happen when you limit the amount of memory/space used ? Say something like -s file,/mnt/varnish/varnish_storage.bin,7G Regards, Martin On 03/02/2011 05:46 PM, Romain LE DISEZ wrote: > Hello all, > > I'm pretty new to Varnish. I'm deploying it because one of our customer > is going to have a special event and the website is pretty slow (I'm > working for an Internet hosting company). We are expecting more than > 1000 requests per seconds. > > From what I read here and there, this should not be a problem for > Varnish. > > My problem is that when Varnish is using cache ("deliver", as opposed to > "pass"), the CPU consumption increases drasticaly, also the RAM. > > The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. > > > With a simple test like this (robots.txt = 300 bytes) : > ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt > > CPU consumption is fluctuating between 120% and 160%. > > Second point is that Varnish consumes all the memory. Trying to limit > that, I made a tmpfs mountpoint of 3G : > mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ > > But varnish continues to consume all the memory > > My configuration is attached to this mail. Varnish is launched like > this : > /usr/sbin/varnishd -P /var/run/varnish.pid > -a :80 > -f /etc/varnish/default.vcl > -T 127.0.0.1:6082 > -t 120 > -w 120,120,120 > -u varnish -g varnish > -S /etc/varnish/secret > -s file,/mnt/varnish/varnish_storage.bin,100% > -p thread_pools 4 > > I also tried to launch it with parameter "-h classic" > > It is installed on a Centos 5 up to date, with lastest RPMs provided by > the varnish repository. > > If I put a return (pass) in vcl_fetch, everything is fine (except the > backend server, of course). It makes me think, with my little knowledges > of Varnish, that the problem is in the delivering from cache. > > Output of "varnishstat -1", when running ab, is attached. > > Thanks for your help. > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stewsnooze at gmail.com Thu Mar 3 14:10:40 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Thu, 3 Mar 2011 13:10:40 +0000 Subject: 403 error message In-Reply-To: <4D6F5128.8020707@suburban-glory.com> References: <4D6F5128.8020707@suburban-glory.com> Message-ID: <1870656644209048894@unknownmsgid> On 3 Mar 2011, at 08:29, Andy Walpole wrote: Hi folks, I installed Varnish about a month ago but I've had a number of 403 error messages since (Service Unavailable Guru Meditation: XID: 583189221). It is only solved after a server reboot. I've no idea what is causing them. What is the best way of dissecting the problem? Is there an error file with Varnish? 
Regards, Andy -- ---------------------- Andy Walpole http://www.suburban-glory.com Work: 05601310400 (local rates) Mob: 07858756827 Skype: andy-walpole *Create and do what is new, through and through* _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc 403 points at your backend telling varnish it is forbidden. If varnish is giving you that error it is working and the backend is giving it 403. I've seen this before if backend apps use some sort of rate limiting per ip as by default when you add varnish to an existing setup the ip that gets passed to the backend is the varnish ip not the source ip. You could try passing the ip as x-forwarded-for Stew -------------- next part -------------- An HTML attachment was scrubbed... URL: From camcima at hotmail.com Thu Mar 3 15:33:28 2011 From: camcima at hotmail.com (Carlos Cima) Date: Thu, 3 Mar 2011 11:33:28 -0300 Subject: Grace Message-ID: Hi, Is there any way to check if a particular request was answered "in grace" by sending a HTTP header? I'm trying to increase the grace period if the user-agent contains "Googlebot" in order to speed up crawling response time and consequently be better positioned in Google organic search results. When I access using Googlebot in the user-agent header I'm not sure if Varnish is waiting for a backend request or not. VCL excerpt: sub vcl_recv { ... # Set Grace if (req.http.user-agent ~ "Googlebot") { set req.grace = 12h; } else { set req.grace = 30m; } ... } sub vcl_fetch { ... # Set Grace set beresp.grace = 12h; ... } Thanks, Carlos Cima -------------- next part -------------- An HTML attachment was scrubbed... URL: From shib4u at gmail.com Thu Mar 3 17:23:41 2011 From: shib4u at gmail.com (Shibashish) Date: Thu, 3 Mar 2011 21:53:41 +0530 Subject: Cache dynamic URLs Message-ID: Hi All, My "varnishtop -b -i TxURL" shows... 
9.99 TxURL /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=1 9.99 TxURL /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 9.99 TxURL /xyz.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 6.00 TxURL /aabb.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 5.99 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.99 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.99 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=1 4.00 TxURL /podsaabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 4.00 TxURL /podsaabb.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 4.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=1 3.99 TxURL /xyz.php?id=2182&status=1 3.00 TxURL /aabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2182&status=1&bt1=1412&bt2=0&pl1=318&pl2=6667&stat=1 3.00 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=1 3.00 TxURL /xyz.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=1 3.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=1 3.00 TxURL /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 3.00 TxURL /xyz.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=1 2.00 TxURL /aabb.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=0&pl1=1053&pl2=360&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=1 2.00 TxURL /aabb.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=1 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=1 2.00 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=2 2.00 TxURL /xyz.php?id=2183&status=2&bt1=7852&bt2=7676&pl1=142&pl2=1053&stat=2 2.00 TxURL /xyz.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 How can i cache those dynamic pages in Varnish, say for 30 sec ? Thanks. -- Shib -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhalfmoon at milksnot.com Thu Mar 3 17:29:04 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Thu, 03 Mar 2011 17:29:04 +0100 Subject: backend_toolate count suddenly drops after upgrade from 2.1.3 too 2.1.5 In-Reply-To: <1870656644209048894@unknownmsgid> References: <4D6F5128.8020707@suburban-glory.com> <1870656644209048894@unknownmsgid> Message-ID: <4D6FC1D0.7020508@milksnot.com> Hiya, today I upgraded a few Varnish servers from v2.1.3 to v2.1.5. 
The machines are purring along nicely, but I did notice something curious on in the server's statistics: the backend_toolate is down from a very wobbly average of 20p/s too a constant 0.7p/s. Also , the object & object head count are way down. n_lru_nuked is also down from an average of 10p/s to zero. Hitrate is unaffected and performance is slightly up (a few percent less cpuload on high-traffic moments). This is no temporary effect, because I've seen it on another machine in the same cluster, which I upgraded a week ago. I did a quick comparison between 2.1.3 and 2.1.5 of varnishadm's 'param.show' and also a quick scan of the sourcecode of 2.1.3 & 2.1.5, but couldn't find any parameter defaults that might have been changed between versions. It's not causing any issues here, other that a bit more performance. I'm just curious: Does anybody know what's going on here? Cheers, Johhny -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish-215-backendtoolate.jpg Type: image/jpeg Size: 25299 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish-215-headcount.jpg Type: image/jpeg Size: 19799 bytes Desc: not available URL: From isharov at yandex-team.ru Thu Mar 3 17:35:23 2011 From: isharov at yandex-team.ru (Iliya Sharov) Date: Thu, 03 Mar 2011 19:35:23 +0300 Subject: Cache dynamic URLs In-Reply-To: References: Message-ID: <4D6FC34B.5010209@yandex-team.ru> Hi. May be sub vcl_hash { set req.hash +=req.url; return(hash); } sub vcl_fetch { if (req.url ~ "(php") { set beresp.ttl =30s;} } and -p lru_interval=30 in command-line run options? Wbr, Iliya Sharov 03.03.2011 19:23, Shibashish ?????: > Hi All, > > My "varnishtop -b -i TxURL" shows... > > 9.99 TxURL > /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=1 > 9.99 TxURL > /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 9.99 TxURL > /xyz.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 6.00 TxURL > /aabb.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 5.99 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.99 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.99 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=1 > 4.00 TxURL > /podsaabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /podsaabb.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=1 > 3.99 TxURL /xyz.php?id=2182&status=1 > 3.00 TxURL > /aabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2182&status=1&bt1=1412&bt2=0&pl1=318&pl2=6667&stat=1 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=1 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=1 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=1 > 3.00 
TxURL > /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=1 > 2.00 TxURL > /aabb.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=0&pl1=1053&pl2=360&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=1 > 2.00 TxURL > /aabb.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=1 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=1 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=2 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=7852&bt2=7676&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > > How can i cache those dynamic pages in Varnish, say for 30 sec ? > > Thanks. > > -- > Shib > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.ledisez at netensia.fr Thu Mar 3 17:57:11 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Thu, 03 Mar 2011 17:57:11 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <4D6F6700.5020604@bizztravel.nl> References: <1299084382.2658.200.camel@romain.v.netensia.net> <4D6F6700.5020604@bizztravel.nl> Message-ID: <1299171431.2628.47.camel@romain.v.netensia.net> Hello Martin, Le jeudi 03 mars 2011 ? 11:01 +0100, Martin Boer a ?crit : > Wat does happen when you limit the amount of memory/space used ? Say > something like > > -s file,/mnt/varnish/varnish_storage.bin,7G I did that : # free -m total used free Mem: 7964 156 7807 -/+ buffers/cache: 156 7807 # /etc/init.d/varnish start Starting varnish HTTP accelerator: [ OK ] # free -m total used free Mem: 7964 5044 2920 -/+ buffers/cache: 5044 2920 # ps uax | grep varnish /usr/sbin/varnishd [...] -s file,/mnt/varnish/varnish_storage.bin,1G -p thread_pools 4 Even with a limit of 1G, it consumes 5G of RAM. Could it be related to the number of thread ? -- Romain LE DISEZ -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From thebog at gmail.com Thu Mar 3 18:11:10 2011 From: thebog at gmail.com (thebog) Date: Thu, 3 Mar 2011 18:11:10 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299171431.2628.47.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> <4D6F6700.5020604@bizztravel.nl> <1299171431.2628.47.camel@romain.v.netensia.net> Message-ID: The storage you are assigning is the storage for objects, not memoryspace. When it comes to how much memory Varnish uses, it's assigned by the OS. There is a big difference of how much is uses and how much it's assigned by the OS (normally). Use the top command and read the difference between whats actually used and how much is reserved. Read: http://www.varnish-cache.org/docs/2.1/faq/general.html for more info around that. 
In short, Varnish is using modern OS technics to find the "right" balance, and therefore memory should not be an issue. The burning of CPU is not correct, but I don't have any good pointers there. Join the irc channel, and ask if someone there can help you out. YS Anders Berg On Thu, Mar 3, 2011 at 5:57 PM, Romain LE DISEZ wrote: > Hello Martin, > > Le jeudi 03 mars 2011 ? 11:01 +0100, Martin Boer a ?crit : >> Wat does happen when you limit the amount of memory/space used ? Say >> something like >> >> -s file,/mnt/varnish/varnish_storage.bin,7G > > I did that : > > # free -m > ? ? ? ? ? ? total ? ? ? used ? ? ? free > Mem: ? ? ? ? ?7964 ? ? ? ?156 ? ? ? 7807 > -/+ buffers/cache: ? ? ? ?156 ? ? ? 7807 > > # /etc/init.d/varnish start > Starting varnish HTTP accelerator: ? ? ? ? ? ? ? ? ? ? ? ? [ ?OK ?] > > # free -m > ? ? ? ? ? ? total ? ? ? used ? ? ? free > Mem: ? ? ? ? ?7964 ? ? ? 5044 ? ? ? 2920 > -/+ buffers/cache: ? ? ? 5044 ? ? ? 2920 > > # ps uax | grep varnish > /usr/sbin/varnishd [...] -s file,/mnt/varnish/varnish_storage.bin,1G -p thread_pools 4 > > Even with a limit of 1G, it consumes 5G of RAM. Could it be related to > the number of thread ? > > > -- > Romain LE DISEZ > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From ruben.ortiz at itnet.es Thu Mar 3 16:18:57 2011 From: ruben.ortiz at itnet.es (=?iso-8859-1?Q?Rub=E9n_Ortiz?=) Date: Thu, 3 Mar 2011 16:18:57 +0100 Subject: Varnish Thread_Pool_Max. How to increase? Message-ID: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> Hello Firstable my Varnish version varnishd (varnish-2.0.4) Linux Kernel 2.6.18-028stab070.14 I'm really new to Varnish. I want to configure it in my way, tunning some parameters but I don't know how. I have this setup: DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -u varnish -g varnish \ -w 100,2000 \ -s file,/var/lib/varnish/varnish_storage.bin,2G" Theorically, with -w 100,2000 I'm telling to Varnish to increase its defaults values for thread_pool_min,thread_pool_max and yes, when I check stats with param.show I can see the changes. Previously, I have reboted varnish daemon. But How can I increase thread_pool_max? I was able to change in admin console, but when I reboot service, the param backs to its default config (2) Thanks people! Rub?n Ortiz Administrador de Sistemas Edificio Nova Gran Via Av. Gran V?a, 16-20, 2? planta | 08902 L'Hospitalet de Llobregat (Barcelona) T 902 999 343 | F 902 999 341 www.grupoitnet.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: grupo-itnet.jpg Type: image/jpeg Size: 7808 bytes Desc: not available URL: From liulu2 at leadsec.com.cn Fri Mar 4 03:14:01 2011 From: liulu2 at leadsec.com.cn (=?gb2312?B?wfXCtg==?=) Date: Fri, 4 Mar 2011 10:14:01 +0800 Subject: [bug]varnish-2.1.5 run fail in linux-2.4.20 Message-ID: <201103041014015137100@leadsec.com.cn> jemalloc_linux.c line:5171 pthread_atfork(_malloc_prefork, _malloc_postfork, _malloc_postfork) produce of deadlock. 2011-03-04 liulu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nadahalli at gmail.com Fri Mar 4 07:08:36 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 01:08:36 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Message-ID: Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- client_conn 12408 8.19 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 5549280 3662.89 Client requests received cache_hit 5543904 3659.34 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 5376 3.55 Cache misses backend_conn 5373 3.55 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 3 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 5373 3.55 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 798 . N struct sess_mem n_sess 548 . N struct sess n_object 5373 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 5806 . N struct objectcore n_objecthead 5806 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 800 . 
N worker threads n_wrk_create 800 0.53 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 0 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 74099 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5543132 3658.83 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 12407 8.19 Total Sessions s_req 5549280 3662.89 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 5373 3.55 Total fetch s_hdrbytes 1245394845 822042.80 Total header bytes s_bodybytes 13448598673 8876962.82 Total body bytes sess_closed 43 0.03 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 5549262 3662.88 Session Linger sess_herd 1431702 945.02 Session herd shm_records 162564756 107303.47 SHM records shm_writes 7031015 4640.93 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 138344 91.32 SHM MTX contention shm_cycles 60 0.04 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 10746 7.09 SMA allocator requests sma_nobj 10746 . SMA outstanding allocations sma_nbytes 17538168 . SMA outstanding bytes sma_balloc 17538168 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 3 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 1254 . SMS bytes allocated sms_bfree 1254 . SMS bytes freed backend_req 5373 3.55 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 5540904 3657.36 HCB Lookups without lock hcb_lock 5375 3.55 HCB Lookups with lock hcb_insert 5373 3.55 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 1515 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) From tfheen at varnish-software.com Fri Mar 4 08:33:56 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 04 Mar 2011 08:33:56 +0100 Subject: backend_toolate count suddenly drops after upgrade from 2.1.3 too 2.1.5 In-Reply-To: <4D6FC1D0.7020508@milksnot.com> (Johnny Halfmoon's message of "Thu, 03 Mar 2011 17:29:04 +0100") References: <4D6F5128.8020707@suburban-glory.com> <1870656644209048894@unknownmsgid> <4D6FC1D0.7020508@milksnot.com> Message-ID: <87d3m7e3vf.fsf@qurzaw.varnish-software.com> ]] Johnny Halfmoon | It's not causing any issues here, other that a bit more | performance. 
I'm just curious: Does anybody know what's going on here? It could be the automatic retry of requests when the backend closes the connection at us. See commits 19966c023f3bba30c32187a0c432c1711ac25201 and f7a5d684ef8fa5352f5fe6f5a28f6fe45f72deb1 regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From perbu at varnish-software.com Fri Mar 4 08:48:05 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 4 Mar 2011 08:48:05 +0100 Subject: Cache dynamic URLs In-Reply-To: <4D6FC34B.5010209@yandex-team.ru> References: <4D6FC34B.5010209@yandex-team.ru> Message-ID: On Thu, Mar 3, 2011 at 5:35 PM, Iliya Sharov wrote: > Hi. > May be > sub vcl_hash > { > set req.hash > +=req.url; > > > return(hash); > } > This part isn't necessary. > > sub vcl_fetch > { > if (req.url ~ "(php") { set beresp.ttl =30s;} > } > It will work. > and -p lru_interval=30 in command-line run options? > This is also not relevant. I wouldn't recommend screwing around with parameters unless it is called for and you're know what you're doing. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Fri Mar 4 08:50:37 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 4 Mar 2011 08:50:37 +0100 Subject: Varnish Thread_Pool_Max. How to increase? In-Reply-To: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> References: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> Message-ID: 2011/3/3 Rub?n Ortiz > But How can I increase thread_pool_max? I was able to change in admin > console, but when I reboot service, the param backs to its default config > (2) > Take a look at the init script. It will probably source /etc/sysconfig/varnish or /etc/default/varnish and you can set the startup parameters there. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Fri Mar 4 08:56:24 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 04 Mar 2011 08:56:24 +0100 Subject: [bug]varnish-2.1.5 run fail in linux-2.4.20 In-Reply-To: <201103041014015137100@leadsec.com.cn> (=?utf-8?B?IuWImA==?= =?utf-8?B?6ZyyIidz?= message of "Fri, 4 Mar 2011 10:14:01 +0800") References: <201103041014015137100@leadsec.com.cn> Message-ID: <878vwve2tz.fsf@qurzaw.varnish-software.com> ]] "??" Hi, | jemalloc_linux.c line:5171 pthread_atfork(_malloc_prefork, _malloc_postfork, _malloc_postfork) produce of deadlock. 2.4 is quite old, 2.4.20 was released in 2002 so you should upgrade to something newer. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From steve.webster at lovefilm.com Fri Mar 4 11:14:18 2011 From: steve.webster at lovefilm.com (Steve Webster) Date: Fri, 4 Mar 2011 10:14:18 +0000 Subject: Processing ESIs in parallel Message-ID: Hi, We've been looking at using Varnish 2.1.5 with ESIs to allow us to cache the bulk of our page content whilst still generating the user-specific sections dynamically. The sticking point for us is that some of these page sections cannot be cached. 
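To make the layout concrete (the paths are made up), the kind of template we have in mind looks roughly like:

<esi:include src="/fragments/header"/>      <!-- cacheable -->
<esi:include src="/fragments/listing"/>     <!-- cacheable -->
<esi:include src="/fragments/user-panel"/>  <!-- per-user, must not be cached -->

with VCL along the lines of:

sub vcl_recv {
    if (req.url ~ "^/fragments/user-panel") {
        return (pass);
    }
}

sub vcl_fetch {
    if (beresp.http.Content-Type ~ "text/html") {
        esi;
    }
}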
It seems, based on both observed behaviour and a quick look at the code for ESI_Deliver, that Varnish is processing and requesting content for the ESIs serially rather than in parallel. I know there has been a lot of work on ESIs for Varnish 3, but as far as I can tell they are still processed serially. Are there any plans to switch to a parallel processing model? If not, might this be a worthy feature request for a future version of Varnish? Cheers, Steve -- Steve Webster Web Architect LOVEFiLM ----------------------------------------------------------------------------------------------------------------------------------------- LOVEFiLM UK Limited is a company registered in England and Wales. Registered Number: 06528297. Registered Office: No.9, 6 Portal Way, London W3 6RU, United Kingdom. This e-mail is confidential to the ordinary user of the e-mail address to which it was addressed. If you have received it in error, please delete it from your system and notify the sender immediately. This email message has been delivered safely and archived online by Mimecast. For more information please visit http://www.mimecast.co.uk ----------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ksorensen at nordija.com Fri Mar 4 11:56:13 2011 From: ksorensen at nordija.com (Kristian =?ISO-8859-1?Q?Gr=F8nfeldt_S=F8rensen?=) Date: Fri, 04 Mar 2011 11:56:13 +0100 Subject: Processing ESIs in parallel In-Reply-To: References: Message-ID: <1299236173.21671.17.camel@kriller.nordija.dk> On Fri, 2011-03-04 at 10:14 +0000, Steve Webster wrote: > Hi, > > We've been looking at using Varnish 2.1.5 with ESIs to allow us to > cache the bulk of our page content whilst still generating the > user-specific sections dynamically. The sticking point for us is that > some of these page sections cannot be cached. It seems, based on both > observed behaviour and a quick look at the code for ESI_Deliver, that > Varnish is processing and requesting content for the ESIs serially > rather than in parallel. I would like to see that feature in varnish as well. In our case we are including up to several hundred objects from a single document, and due to the nature of our data, chances are that if the first included ESI-object is a miss, then most of the remaining ESI-objects will be misses, so it would be great to be able to request some of the objects in parallel to speed up delivery. Regards Kristian S?rensen From phk at phk.freebsd.dk Fri Mar 4 12:07:24 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 04 Mar 2011 11:07:24 +0000 Subject: Processing ESIs in parallel In-Reply-To: Your message of "Fri, 04 Mar 2011 10:14:18 GMT." Message-ID: <61061.1299236844@critter.freebsd.dk> In message , Steve Webster w rites: >I know there has been a lot of work on ESIs for Varnish 3, but as far as I >can tell they are still processed serially. Are there any plans to switch to >a parallel processing model? If not, might this be a worthy feature request >for a future version of Varnish?s I wouldn't call them "plans", but it is on our wish-list. It is not simple though, so don't hold your breath. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From pom at dmsp.de Fri Mar 4 12:34:45 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Fri, 04 Mar 2011 12:34:45 +0100 Subject: varnishreplay question Message-ID: <4D70CE55.20306@dmsp.de> Hi all, I try to use varnishreplay for the first time. What I did is the following: - run varnishlog without any parameters and produce some quite big logfile on a production varnish - copy the log file (from varnishlog) from production to a testing machine (running varnish of course) - fiddle around with vcl to direct traffic to some standby backends - call varnishreplay 'varnishreplay -D -a localhost:80 -r ' Unfortunately a varnishstat on this testing machine does not show me any activity and my only output on console is: 0x7f3d703b5700 thread 0x7f3d704d4710:1701999465 started 0x7f3d703b5700 thread 0x7f3d703b3710:540291889 started 0x7f3d703b5700 thread 0x7f3d703ab710:678913378 started 0x7f3d703b5700 thread 0x7f3d703a3710:540357940 started 0x7f3d703b5700 thread 0x7f3d7039b710:540161076 started 0x7f3d703b5700 thread 0x7f3d70393710:540292660 started 0x7f3d703b5700 thread 0x7f3d7038b710:540292149 started 0x7f3d703b5700 thread 0x7f3d70383710:1701014383 started 0x7f3d703b5700 thread 0x7f3d7037b710:540292919 started 0x7f3d703b5700 thread 0x7f3d70373710:540162097 started 0x7f3d703b5700 thread 0x7f3d7036b710:825110816 started 0x7f3d703b5700 thread 0x7f3d6f28e710:1852796537 started 0x7f3d703b5700 thread 0x7f3d6f286710:540162100 started 0x7f3d703b5700 thread 0x7f3d6f27e710:540095033 started 0x7f3d703b5700 thread 0x7f3d6f276710:540292147 started 0x7f3d703b5700 thread 0x7f3d6f26e710:540094774 started 0x7f3d703b5700 thread 0x7f3d6f266710:540487985 started 0x7f3d703b5700 thread 0x7f3d6f25e710:1107959840 started 0x7f3d703b5700 thread 0x7f3d6f256710:540423988 started 0x7f3d703b5700 thread 0x7f3d6f24e710:540357431 started 0x7f3d703b5700 thread 0x7f3d6f246710:540488244 started 0x7f3d703b5700 thread 0x7f3d6f23e710:540356662 started 0x7f3d703b5700 thread 0x7f3d6f236710:540488756 started 0x7f3d703b5700 thread 0x7f3d6f22e710:540160820 started 0x7f3d703b5700 thread 0x7f3d6f26e710 stopped 0x7f3d703b5700 thread 0x7f3d6f27e710 stopped 0x7f3d703b5700 thread 0x7f3d6f22e710 stopped 0x7f3d703b5700 thread 0x7f3d7039b710 stopped 0x7f3d703b5700 thread 0x7f3d70373710 stopped 0x7f3d703b5700 thread 0x7f3d6f286710 stopped 0x7f3d703b5700 thread 0x7f3d703b3710 stopped 0x7f3d703b5700 thread 0x7f3d6f276710 stopped 0x7f3d703b5700 thread 0x7f3d7038b710 stopped 0x7f3d703b5700 thread 0x7f3d70393710 stopped 0x7f3d703b5700 thread 0x7f3d7037b710 stopped 0x7f3d703b5700 thread 0x7f3d6f23e710 stopped 0x7f3d703b5700 thread 0x7f3d6f24e710 stopped 0x7f3d703b5700 thread 0x7f3d703a3710 stopped 0x7f3d703b5700 thread 0x7f3d6f256710 stopped 0x7f3d703b5700 thread 0x7f3d6f266710 stopped 0x7f3d703b5700 thread 0x7f3d6f246710 stopped 0x7f3d703b5700 thread 0x7f3d6f236710 stopped 0x7f3d703b5700 thread 0x7f3d703ab710 stopped 0x7f3d703b5700 thread 0x7f3d7036b710 stopped 0x7f3d703b5700 thread 0x7f3d6f25e710 stopped 0x7f3d703b5700 thread 0x7f3d70383710 stopped 0x7f3d703b5700 thread 0x7f3d704d4710 stopped 0x7f3d703b5700 thread 0x7f3d6f28e710 stopped Ehm, varnish on production machine is 2.1.3, on testing platform is 2.1.5. Well - I'm doing it wrong - I know... but, how to do it correctly? Any idea or hint? Thanks! Kind regards, Stefan -- *Dipl.-Inform. 
Stefan Pommerening Informatik-B?ro: IT-Dienste & Projekte, Consulting & Coaching* http://www.dmsp.de From steve.webster at lovefilm.com Fri Mar 4 13:39:25 2011 From: steve.webster at lovefilm.com (Steve Webster) Date: Fri, 4 Mar 2011 12:39:25 +0000 Subject: Processing ESIs in parallel In-Reply-To: <61061.1299236844@critter.freebsd.dk> References: <61061.1299236844@critter.freebsd.dk> Message-ID: On 4 Mar 2011, at 11:07, Poul-Henning Kamp wrote: > In message , Steve Webster w > rites: > >> I know there has been a lot of work on ESIs for Varnish 3, but as far as I >> can tell they are still processed serially. Are there any plans to switch to >> a parallel processing model? If not, might this be a worthy feature request >> for a future version of Varnish?s > > I wouldn't call them "plans", but it is on our wish-list. This is good news. > It is not simple though, so don't hold your breath. Indeed. I had one of those "how hard could this be" moments and started trying to implement it myself, then realised I had opened a can of worms and decided to leave Varnish hacking to the experts. I have a workaround for now ? a custom Apache output filter that uses LWP::Parallel ? so thankfully breathe-holding isn't necessary. Cheers, Steve -- Steve Webster Web Architect LOVEFiLM ----------------------------------------------------------------------------------------------------------------------------------------- LOVEFiLM UK Limited is a company registered in England and Wales. Registered Number: 06528297. Registered Office: No.9, 6 Portal Way, London W3 6RU, United Kingdom. This e-mail is confidential to the ordinary user of the e-mail address to which it was addressed. If you have received it in error, please delete it from your system and notify the sender immediately. This email message has been delivered safely and archived online by Mimecast. For more information please visit http://www.mimecast.co.uk ----------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Fri Mar 4 15:43:42 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Fri, 4 Mar 2011 09:43:42 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Check your ec2 network settings. OS and varnish settings look okay, your varnishstat shows that varnish is coasting along fine. It's not threads. You have 800 available, according to the varnishstat; it's running with 800 threads, handling 12,000+ connections, and there is no thread creation failure. Therefore it does not need to add threads. What does something like firebug show when you request during the load test? The delay may be anything from DNS to the ec2 network. 
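Something like this against the Varnish box while the load test is running will show where the time goes (the URL is a placeholder for one of your cached objects):

curl -o /dev/null -s -w 'dns %{time_namelookup}  connect %{time_connect}  firstbyte %{time_starttransfer}  total %{time_total}\n' http://your-varnish-host/some-cached-url

If connect is already slow, look at the network and listen queue; if connect is fast but the first byte takes 10+ seconds, look at varnish itself (varnishlog for that client request will show the timing).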
Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:09 AM To: varnish-misc at varnish-cache.org Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/0 05351.html (Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/ 005258.html (Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelcrocha at gmail.com Fri Mar 4 18:47:10 2011 From: rafaelcrocha at gmail.com (rafael) Date: Fri, 04 Mar 2011 14:47:10 -0300 Subject: ESI include does not work until I reload page Message-ID: <4D71259E.1090305@gmail.com> # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. backend backend_0 { .host = "127.0.0.1"; .port = "1010"; .connect_timeout = 0.4s; .first_byte_timeout = 300s; .between_bytes_timeout = 60s; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { set req.grace = 120s; set req.backend = backend_0; if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ pass; } if (req.http.If-None-Match) { pass; } if (req.url ~ "createObject") { pass; } remove req.http.Accept-Encoding; lookup; } sub vcl_pipe { # This is not necessary if you do not do any request rewriting. 
set req.http.connection = "close"; set bereq.http.connection = "close"; } sub vcl_hit { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged"; } if (!obj.cacheable) { pass; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache"; } } sub vcl_fetch { set obj.grace = 120s; if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") { pass; } if (req.http.Authorization && !obj.http.Cache-Control ~ "public") { pass; } if (obj.http.Content-Type ~ "text/html") { esi; } } sub vcl_hash { set req.hash += req.url; set req.hash += req.http.host; if (req.http.Accept-Encoding ~ "gzip") { set req.hash += "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.hash += "deflate"; } } From nadahalli at gmail.com Fri Mar 4 19:22:58 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 13:22:58 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Message-ID: On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan wrote: > > What does something like firebug show when you request during the load > test? The delay may be anything from DNS to the ec2 network. > The DNS requests are getting resolved super quick. I am unable to see any other network issues with EC2. I have a similar machine in the same data center running nginx which is doing similar loads, but with no caching requirement, and it's running fine. In my first post, I forgot to attach my VCL, which is a bit too minimal. Am I missing something obvious? ------ backend default0 { .host = "10.202.30.39"; .port = "8000"; } sub vcl_recv { unset req.http.Cookie; set req.grace = 3600s; set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } ------------------------- Could there be some kind of TCP packet pileup that I am missing? -T > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:09 AM > *To:* varnish-misc at varnish-cache.org > *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > Hi Everyone, > > I am seeing a situation similar to : > > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) > > I have httperf loading a varnish cache with never-expire content. While the > load is on, other browser/wget requests to the varnish server get delayed to > 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be > impacted. So, is it some kind of thread problem? > > In production, I see a similar situation with around 1000 req/second load. 
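One detail worth flagging in the VCL above: in Varnish 2.1, the req.grace set in vcl_recv only takes effect if the cached object itself carries a grace period, which is set on obj in vcl_fetch (otherwise the default_grace parameter applies). A minimal sketch, assuming the rest of the configuration stays exactly as posted:

sub vcl_fetch {
    # Keep objects around past their TTL so the req.grace in vcl_recv can use them.
    set obj.grace = 3600s;
}

With a one-year TTL on never-expiring content this may not change much in the test itself, but it is the piece normally paired with req.grace.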
> > I am running varnishd with the following command line options (as per > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -t 31536000 > > I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. > > My network sysctl parameters are tuned according to: > http://varnish-cache.org/trac/wiki/Performance > fs.file-max = 360000 > net.ipv4.ip_local_port_range = 1024 65536 > net.core.rmem_max = 16777216 > net.core.wmem_max = 16777216 > net.ipv4.tcp_rmem = 4096 87380 16777216 > net.ipv4.tcp_wmem = 4096 65536 16777216 > net.ipv4.tcp_fin_timeout = 3 > net.core.netdev_max_backlog = 30000 > net.ipv4.tcp_no_metrics_save = 1 > net.core.somaxconn = 262144 > net.ipv4.tcp_syncookies = 0 > net.ipv4.tcp_max_orphans = 262144 > net.ipv4.tcp_max_syn_backlog = 262144 > net.ipv4.tcp_synack_retries = 2 > net.ipv4.tcp_syn_retries = 2 > > > Any help would be greatly appreciated > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Fri Mar 4 20:25:12 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Fri, 4 Mar 2011 14:25:12 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> There's no health check in the backend. Not sure what that does with a one hour grace. I set a short grace with if (req.backend.healthy) { set req.grace = 60s; } else { set req.grace = 4h; } You also don't appear to select a backend in recv. Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:23 PM To: varnish-misc at varnish-cache.org Subject: Re: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan wrote: What does something like firebug show when you request during the load test? The delay may be anything from DNS to the ec2 network. The DNS requests are getting resolved super quick. I am unable to see any other network issues with EC2. I have a similar machine in the same data center running nginx which is doing similar loads, but with no caching requirement, and it's running fine. In my first post, I forgot to attach my VCL, which is a bit too minimal. Am I missing something obvious? ------ backend default0 { .host = "10.202.30.39"; .port = "8000"; } sub vcl_recv { unset req.http.Cookie; set req.grace = 3600s; set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } ------------------------- Could there be some kind of TCP packet pileup that I am missing? 
-T Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:09 AM To: varnish-misc at varnish-cache.org Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/0 05351.html (Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/ 005258.html (Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Fri Mar 4 20:30:00 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 14:30:00 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: > There?s no health check in the backend. Not sure what that does with a one > hour grace. I set a short grace with > > > > if (req.backend.healthy) { > > set req.grace = 60s; > > } else { > > set req.grace = 4h; > > } > I am still to add health-checks, directors, etc. Will add them soon. But those make sense if the cache-primed performance is good. In my test, I am requesting URLs who I know are already in the cache. Varnishstat also shows that - there are no cache misses at all. > > > You also don?t appear to select a backend in recv. > The default backend seems to be getting picked up automatically. 
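For reference, a rough sketch of what Stefan's two suggestions (a health check on the backend, plus explicit backend selection with conditional grace) could look like in 2.1 VCL, reusing the backend address quoted above; the probe URL and timings are placeholders, not a tested configuration:

backend default0 {
    .host = "10.202.30.39";
    .port = "8000";
    # Placeholder probe; point it at a cheap URL the backend always answers.
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

sub vcl_recv {
    # Select the backend explicitly rather than relying on the implicit default.
    set req.backend = default0;
    if (req.backend.healthy) {
        set req.grace = 60s;
    } else {
        set req.grace = 4h;
    }
}

Pairing this with a matching obj.grace in vcl_fetch is what actually lets stale objects be served while the backend is marked sick.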
-T > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:23 PM > > *To:* varnish-misc at varnish-cache.org > *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan > wrote: > > > > What does something like firebug show when you request during the load > test? The delay may be anything from DNS to the ec2 network. > > > The DNS requests are getting resolved super quick. I am unable to see any > other network issues with EC2. I have a similar machine in the same data > center running nginx which is doing similar loads, but with no caching > requirement, and it's running fine. > > In my first post, I forgot to attach my VCL, which is a bit too minimal. Am > I missing something obvious? > > ------ > backend default0 { > .host = "10.202.30.39"; > .port = "8000"; > } > > sub vcl_recv { > unset req.http.Cookie; > set req.grace = 3600s; > set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); > } > > sub vcl_deliver { > if (obj.hits > 0) { > set resp.http.X-Cache = "HIT"; > } else { > set resp.http.X-Cache = "MISS"; > } > } > ------------------------- > > Could there be some kind of TCP packet pileup that I am missing? > > -T > > > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:09 AM > *To:* varnish-misc at varnish-cache.org > *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > Hi Everyone, > > I am seeing a situation similar to : > > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) > > I have httperf loading a varnish cache with never-expire content. While the > load is on, other browser/wget requests to the varnish server get delayed to > 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be > impacted. So, is it some kind of thread problem? > > In production, I see a similar situation with around 1000 req/second load. > > I am running varnishd with the following command line options (as per > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -t 31536000 > > I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. 
> > My network sysctl parameters are tuned according to: > http://varnish-cache.org/trac/wiki/Performance > fs.file-max = 360000 > net.ipv4.ip_local_port_range = 1024 65536 > net.core.rmem_max = 16777216 > net.core.wmem_max = 16777216 > net.ipv4.tcp_rmem = 4096 87380 16777216 > net.ipv4.tcp_wmem = 4096 65536 16777216 > net.ipv4.tcp_fin_timeout = 3 > net.core.netdev_max_backlog = 30000 > net.ipv4.tcp_no_metrics_save = 1 > net.core.somaxconn = 262144 > net.ipv4.tcp_syncookies = 0 > net.ipv4.tcp_max_orphans = 262144 > net.ipv4.tcp_max_syn_backlog = 262144 > net.ipv4.tcp_synack_retries = 2 > net.ipv4.tcp_syn_retries = 2 > > > Any help would be greatly appreciated > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Fri Mar 4 21:19:34 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 15:19:34 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: Under loaded conditions (3 machines doing httperf separately), I did a separate wget on the side, and am attaching the TCPDUMP of that request. As you can see, there is a delay in the middle where varnish didn't respond immediately. If thread/hit-rate conditions are optimal, this delay should be minimal I thought. Any help would be appreciated. -T On Fri, Mar 4, 2011 at 2:30 PM, Tejaswi Nadahalli wrote: > On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: > >> There?s no health check in the backend. Not sure what that does with a one >> hour grace. I set a short grace with >> >> >> >> if (req.backend.healthy) { >> >> set req.grace = 60s; >> >> } else { >> >> set req.grace = 4h; >> >> } >> > > I am still to add health-checks, directors, etc. Will add them soon. But > those make sense if the cache-primed performance is good. In my test, I am > requesting URLs who I know are already in the cache. Varnishstat also shows > that - there are no cache misses at all. > > >> >> >> You also don?t appear to select a backend in recv. >> > > The default backend seems to be getting picked up automatically. > > -T > > >> >> >> Stefan Caunter >> >> Operations >> >> Torstar Digital >> >> m: (416) 561-4871 >> >> >> >> >> >> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >> *Sent:* March-04-11 1:23 PM >> >> *To:* varnish-misc at varnish-cache.org >> *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed >> Reponse >> >> >> >> On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan >> wrote: >> >> >> >> What does something like firebug show when you request during the load >> test? The delay may be anything from DNS to the ec2 network. >> >> >> The DNS requests are getting resolved super quick. I am unable to see any >> other network issues with EC2. I have a similar machine in the same data >> center running nginx which is doing similar loads, but with no caching >> requirement, and it's running fine. >> >> In my first post, I forgot to attach my VCL, which is a bit too minimal. >> Am I missing something obvious? 
>> >> ------ >> backend default0 { >> .host = "10.202.30.39"; >> .port = "8000"; >> } >> >> sub vcl_recv { >> unset req.http.Cookie; >> set req.grace = 3600s; >> set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); >> } >> >> sub vcl_deliver { >> if (obj.hits > 0) { >> set resp.http.X-Cache = "HIT"; >> } else { >> set resp.http.X-Cache = "MISS"; >> } >> } >> ------------------------- >> >> Could there be some kind of TCP packet pileup that I am missing? >> >> -T >> >> >> >> >> Stefan Caunter >> >> Operations >> >> Torstar Digital >> >> m: (416) 561-4871 >> >> >> >> >> >> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >> *Sent:* March-04-11 1:09 AM >> *To:* varnish-misc at varnish-cache.org >> *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed >> Reponse >> >> >> >> Hi Everyone, >> >> I am seeing a situation similar to : >> >> >> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) >> >> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) >> >> I have httperf loading a varnish cache with never-expire content. While >> the load is on, other browser/wget requests to the varnish server get >> delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem >> to be impacted. So, is it some kind of thread problem? >> >> In production, I see a similar situation with around 1000 req/second load. >> >> >> I am running varnishd with the following command line options (as per >> http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): >> >> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >> 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p >> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >> session_linger=100 -p lru_interval=20 -t 31536000 >> >> I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. >> >> My network sysctl parameters are tuned according to: >> http://varnish-cache.org/trac/wiki/Performance >> fs.file-max = 360000 >> net.ipv4.ip_local_port_range = 1024 65536 >> net.core.rmem_max = 16777216 >> net.core.wmem_max = 16777216 >> net.ipv4.tcp_rmem = 4096 87380 16777216 >> net.ipv4.tcp_wmem = 4096 65536 16777216 >> net.ipv4.tcp_fin_timeout = 3 >> net.core.netdev_max_backlog = 30000 >> net.ipv4.tcp_no_metrics_save = 1 >> net.core.somaxconn = 262144 >> net.ipv4.tcp_syncookies = 0 >> net.ipv4.tcp_max_orphans = 262144 >> net.ipv4.tcp_max_syn_backlog = 262144 >> net.ipv4.tcp_synack_retries = 2 >> net.ipv4.tcp_syn_retries = 2 >> >> >> Any help would be greatly appreciated >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- 20:15:46.896200 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [S], seq 975218147, win 5840, options [mss 1460,sackOK,TS val 239507633 ecr 0,nop,wscale 6], length 0 20:15:46.896220 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [S.], seq 2642556500, ack 975218148, win 5792, options [mss 1460,sackOK,TS val 267323553 ecr 239507633,nop,wscale 9], length 0 20:15:46.932874 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 1, win 92, options [nop,nop,TS val 239507639 ecr 267323553], length 0 20:15:46.932900 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [P.], seq 1:341, ack 1, win 92, options [nop,nop,TS val 239507639 ecr 267323553], length 340 20:15:46.933404 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], ack 341, win 14, options [nop,nop,TS val 267323556 ecr 239507639], length 0 20:16:07.129730 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 1:2897, ack 341, win 14, options [nop,nop,TS val 267325576 ecr 239507639], length 2896 20:16:07.129752 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 2897:4345, ack 341, win 14, options [nop,nop,TS val 267325576 ecr 239507639], length 1448 20:16:07.138422 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 1449, win 137, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.138439 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 4345:5793, ack 341, win 14, options [nop,nop,TS val 267325577 ecr 239512697], length 1448 20:16:07.138446 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [P.], seq 5793:5998, ack 341, win 14, options [nop,nop,TS val 267325577 ecr 239512697], length 205 20:16:07.138450 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 2897, win 182, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.138456 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 4345, win 227, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.148340 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5793, win 273, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148350 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5998, win 318, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148353 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [F.], seq 341, ack 5998, win 318, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148441 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [F.], seq 5998, ack 342, win 14, options [nop,nop,TS val 267325578 ecr 239512699], length 0 20:16:07.156951 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5999, win 318, options [nop,nop,TS val 239512702 ecr 267325578], length 0 From nadahalli at gmail.com Fri Mar 4 22:01:42 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 16:01:42 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: According to http://www.spinics.net/lists/linux-net/msg17545.html - it might be due to "Overflowing the listen() command's incoming connection backlog." I simulated my load again, and here're the listen status before and during the test. 
Before: 3689345 times the listen queue of a socket overflowed 3689345 SYNs to LISTEN sockets dropped During: 3690354 times the listen queue of a socket overflowed 3690354 SYNs to LISTEN sockets dropped My net.core.somaxconn = 262144, which is pretty high. So, I cannot see what else I can do to increase the backlog's length. Is the only way to add more Varnish servers and load balance them behind Nginx or some such? -T On Fri, Mar 4, 2011 at 3:19 PM, Tejaswi Nadahalli wrote: > Under loaded conditions (3 machines doing httperf separately), I did a > separate wget on the side, and am attaching the TCPDUMP of that request. As > you can see, there is a delay in the middle where varnish didn't respond > immediately. If thread/hit-rate conditions are optimal, this delay should be > minimal I thought. > > Any help would be appreciated. > > -T > > > On Fri, Mar 4, 2011 at 2:30 PM, Tejaswi Nadahalli wrote: > >> On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: >> >>> There?s no health check in the backend. Not sure what that does with a >>> one hour grace. I set a short grace with >>> >>> >>> >>> if (req.backend.healthy) { >>> >>> set req.grace = 60s; >>> >>> } else { >>> >>> set req.grace = 4h; >>> >>> } >>> >> >> I am still to add health-checks, directors, etc. Will add them soon. But >> those make sense if the cache-primed performance is good. In my test, I am >> requesting URLs who I know are already in the cache. Varnishstat also shows >> that - there are no cache misses at all. >> >> >>> >>> >>> You also don?t appear to select a backend in recv. >>> >> >> The default backend seems to be getting picked up automatically. >> >> -T >> >> >>> >>> >>> Stefan Caunter >>> >>> Operations >>> >>> Torstar Digital >>> >>> m: (416) 561-4871 >>> >>> >>> >>> >>> >>> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >>> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >>> *Sent:* March-04-11 1:23 PM >>> >>> *To:* varnish-misc at varnish-cache.org >>> *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed >>> Reponse >>> >>> >>> >>> On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan >>> wrote: >>> >>> >>> >>> What does something like firebug show when you request during the load >>> test? The delay may be anything from DNS to the ec2 network. >>> >>> >>> The DNS requests are getting resolved super quick. I am unable to see any >>> other network issues with EC2. I have a similar machine in the same data >>> center running nginx which is doing similar loads, but with no caching >>> requirement, and it's running fine. >>> >>> In my first post, I forgot to attach my VCL, which is a bit too minimal. >>> Am I missing something obvious? >>> >>> ------ >>> backend default0 { >>> .host = "10.202.30.39"; >>> .port = "8000"; >>> } >>> >>> sub vcl_recv { >>> unset req.http.Cookie; >>> set req.grace = 3600s; >>> set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); >>> } >>> >>> sub vcl_deliver { >>> if (obj.hits > 0) { >>> set resp.http.X-Cache = "HIT"; >>> } else { >>> set resp.http.X-Cache = "MISS"; >>> } >>> } >>> ------------------------- >>> >>> Could there be some kind of TCP packet pileup that I am missing? 
>>> >>> -T >>> >>> >>> >>> >>> Stefan Caunter >>> >>> Operations >>> >>> Torstar Digital >>> >>> m: (416) 561-4871 >>> >>> >>> >>> >>> >>> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >>> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >>> *Sent:* March-04-11 1:09 AM >>> *To:* varnish-misc at varnish-cache.org >>> *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed >>> Reponse >>> >>> >>> >>> Hi Everyone, >>> >>> I am seeing a situation similar to : >>> >>> >>> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) >>> >>> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) >>> >>> I have httperf loading a varnish cache with never-expire content. While >>> the load is on, other browser/wget requests to the varnish server get >>> delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem >>> to be impacted. So, is it some kind of thread problem? >>> >>> In production, I see a similar situation with around 1000 req/second >>> load. >>> >>> I am running varnishd with the following command line options (as per >>> http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): >>> >>> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >>> 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p >>> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >>> session_linger=100 -p lru_interval=20 -t 31536000 >>> >>> I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. >>> >>> My network sysctl parameters are tuned according to: >>> http://varnish-cache.org/trac/wiki/Performance >>> fs.file-max = 360000 >>> net.ipv4.ip_local_port_range = 1024 65536 >>> net.core.rmem_max = 16777216 >>> net.core.wmem_max = 16777216 >>> net.ipv4.tcp_rmem = 4096 87380 16777216 >>> net.ipv4.tcp_wmem = 4096 65536 16777216 >>> net.ipv4.tcp_fin_timeout = 3 >>> net.core.netdev_max_backlog = 30000 >>> net.ipv4.tcp_no_metrics_save = 1 >>> net.core.somaxconn = 262144 >>> net.ipv4.tcp_syncookies = 0 >>> net.ipv4.tcp_max_orphans = 262144 >>> net.ipv4.tcp_max_syn_backlog = 262144 >>> net.ipv4.tcp_synack_retries = 2 >>> net.ipv4.tcp_syn_retries = 2 >>> >>> >>> Any help would be greatly appreciated >>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From drais at icantclick.org Fri Mar 4 23:48:31 2011 From: drais at icantclick.org (david raistrick) Date: Fri, 4 Mar 2011 17:48:31 -0500 (EST) Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: On Fri, 4 Mar 2011, Tejaswi Nadahalli wrote: > Is the only way to add more Varnish servers and load balance them behind > Nginx or some such? Your loadbalancer (varnish, nginx, elb, haproxy, etc) will always be a limiting factor if all traffic only goes through that path. I haven't followed the rest of the thread to know where your real bottleneck is, but just keep that in mind. ;) Your next alternatives (this looks like you're @ AWS) would be ELB in front of varnish (which I do, but with mixed success), or a GSLB (dns based loadbalancing) service in the DNS adding an additional level of seperation. (we use akadns and I have lots of praises and no complaints yet. 
:) -- david raistrick http://www.netmeister.org/news/learn2quote.html drais at icantclick.org http://www.expita.com/nomime.html From nadahalli at gmail.com Sat Mar 5 01:39:56 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 19:39:56 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: I added an Nginx server in front of the varnish cache, and things are swimming just fine now. Does it have something to do with accepting requests from different hosts? Where Nginx does better out of the box than Varnish does? -T On Fri, Mar 4, 2011 at 5:48 PM, david raistrick wrote: > On Fri, 4 Mar 2011, Tejaswi Nadahalli wrote: > > Is the only way to add more Varnish servers and load balance them behind >> Nginx or some such? >> > > Your loadbalancer (varnish, nginx, elb, haproxy, etc) will always be a > limiting factor if all traffic only goes through that path. > > I haven't followed the rest of the thread to know where your real > bottleneck is, but just keep that in mind. ;) > > Your next alternatives (this looks like you're @ AWS) would be ELB in front > of varnish (which I do, but with mixed success), or a GSLB (dns based > loadbalancing) service in the DNS adding an additional level of seperation. > (we use akadns and I have lots of praises and no complaints yet. :) > > > > > -- > david raistrick http://www.netmeister.org/news/learn2quote.html > drais at icantclick.org http://www.expita.com/nomime.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan at iol.ie Sat Mar 5 10:48:20 2011 From: ronan at iol.ie (Ronan Mullally) Date: Sat, 5 Mar 2011 09:48:20 +0000 (GMT) Subject: Varnish returning 503s for Googlebot requests (Bug #813?) Message-ID: Hi, I'm a varnish noob. I've only just started rolling out a cache in front of a VBulletin site running Apache that is currently using pound for load balancing. I'm running 2.1.5 on a debian lenny box. Testing is going well, apart from one problem. The site runs VBSEO to generate sitemap files. Without excpetion, every time Googlebot tries to request these files Varnish returns a 503: 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" I can request these files via wget direct from the backend as well as direct from varnish without a problem: --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: Apache Content-Type: application/x-gzip Content-Length: 130283 Date: Sat, 05 Mar 2011 09:23:38 GMT X-Varnish: 1282440127 Age: 0 Via: 1.1 varnish Connection: keep-alive Length: 130283 (127K) [application/x-gzip] Saving to: `/dev/null' 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] I've reverted back to default.vcl, the only changes being to define my own backends. Varnishlog output is below. Having googled a bit the only thing I've found is bug #813, but that was apparently fixed prior to 2.1.5. Am I missing something obvious? 
-Ronan Varnishlog output 18 ReqStart c 66.249.66.246 63009 1282436348 18 RxRequest c GET 18 RxURL c /sitemap_362.xml.gz 18 RxProtocol c HTTP/1.1 18 RxHeader c Host: www.sitename.net 18 RxHeader c Connection: Keep-alive 18 RxHeader c Accept: */* 18 RxHeader c From: googlebot(at)googlebot.com 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 18 RxHeader c Accept-Encoding: gzip,deflate 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT 18 VCL_call c recv 18 VCL_return c lookup 18 VCL_call c hash 18 VCL_return c hash 18 VCL_call c miss 18 VCL_return c fetch 18 Backend c 40 sitename sitename1 40 TxRequest b GET 40 TxURL b /sitemap_362.xml.gz 40 TxProtocol b HTTP/1.1 40 TxHeader b Host: www.sitename.net 40 TxHeader b Accept: */* 40 TxHeader b From: googlebot(at)googlebot.com 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 40 TxHeader b Accept-Encoding: gzip,deflate 40 TxHeader b X-Forwarded-For: 66.249.66.246 40 TxHeader b X-Varnish: 1282436348 40 RxProtocol b HTTP/1.1 40 RxStatus b 200 40 RxResponse b OK 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT 40 RxHeader b Server: Apache 40 RxHeader b Content-Length: 130327 40 RxHeader b Content-Encoding: gzip 40 RxHeader b Vary: Accept-Encoding 40 RxHeader b Content-Type: application/x-gzip 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 18 VCL_call c fetch 18 VCL_return c deliver 18 ObjProtocol c HTTP/1.1 18 ObjStatus c 200 18 ObjResponse c OK 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT 18 ObjHeader c Server: Apache 18 ObjHeader c Content-Encoding: gzip 18 ObjHeader c Vary: Accept-Encoding 18 ObjHeader c Content-Type: application/x-gzip 18 FetchError c straight read_error: 0 40 Fetch_Body b 4 4294967295 1 40 BackendClose b sitename1 18 VCL_call c error 18 VCL_return c deliver 18 VCL_call c deliver 18 VCL_return c deliver 18 TxProtocol c HTTP/1.1 18 TxStatus c 503 18 TxResponse c Service Unavailable 18 TxHeader c Server: Varnish 18 TxHeader c Retry-After: 0 18 TxHeader c Content-Type: text/html; charset=utf-8 18 TxHeader c Content-Length: 419 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT 18 TxHeader c X-Varnish: 1282436348 18 TxHeader c Age: 1 18 TxHeader c Via: 1.1 varnish 18 TxHeader c Connection: close 18 Length c 419 18 ReqEnd c 1282436348 1299316657.660784483 1299316658.684726000 0.478523970 1.023897409 0.000044107 18 SessionClose c error 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 From brice at digome.com Sat Mar 5 20:52:54 2011 From: brice at digome.com (Brice Burgess) Date: Sat, 05 Mar 2011 13:52:54 -0600 Subject: varnishncsa & VirtualHost Message-ID: <4D729496.7010301@digome.com> I was previously running a SVN build of Varnish 2.1.4 which included fixes for timeouts with Content-Length. At the time there was no 2.1.5 debian package. I also applied the "-v virtualhost patch" [ticket 485] to varnishncsa to support virtualhost logging (as this is a multi-website webserver). Yesterday we updated to Debian Squeeze and I figured it a good time to switch back to official varnish-cache.org debs. We are now running varnish 2.1.5 but to my dismay I cannot get VirtualHost logging in varnishncsa? Apparently the logformat (-F) switch did not make it into this release?? This was a bad presumption. Are there any current solutions for getting virtualhost logging to work? Are there any unofficial .debs supporting the -F or -v options for varnishncsa? 
Many thanks, ~ Brice From mattias at nucleus.be Sun Mar 6 22:05:05 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Sun, 6 Mar 2011 22:05:05 +0100 Subject: Varnish returning 503s for Googlebot requests (Bug #813?) In-Reply-To: References: Message-ID: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> Hi Ronan, Not sure if you've managed to test this yet, but Google seem to run with "Accept-Encoding: gzip". Perhaps there's a problem serving the compressed version, whereas your manual wget's don't use this accept-encoding? Regards, Mattias -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: zaterdag 5 maart 2011 10:48 To: varnish-misc at varnish-cache.org Subject: Varnish returning 503s for Googlebot requests (Bug #813?) Hi, I'm a varnish noob. I've only just started rolling out a cache in front of a VBulletin site running Apache that is currently using pound for load balancing. I'm running 2.1.5 on a debian lenny box. Testing is going well, apart from one problem. The site runs VBSEO to generate sitemap files. Without excpetion, every time Googlebot tries to request these files Varnish returns a 503: 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" I can request these files via wget direct from the backend as well as direct from varnish without a problem: --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: Apache Content-Type: application/x-gzip Content-Length: 130283 Date: Sat, 05 Mar 2011 09:23:38 GMT X-Varnish: 1282440127 Age: 0 Via: 1.1 varnish Connection: keep-alive Length: 130283 (127K) [application/x-gzip] Saving to: `/dev/null' 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] I've reverted back to default.vcl, the only changes being to define my own backends. Varnishlog output is below. Having googled a bit the only thing I've found is bug #813, but that was apparently fixed prior to 2.1.5. Am I missing something obvious? 
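If Mattias's gzip theory holds, one low-risk way to test it from the Varnish side is to stop forwarding Accept-Encoding for objects that are already compressed, so the backend is never asked to re-encode the .xml.gz sitemaps. A sketch, assuming nothing else in the VCL touches Accept-Encoding:

sub vcl_recv {
    if (req.url ~ "\.(gz|tgz|bz2|zip)$") {
        # Already-compressed content; do not ask the backend to gzip it again.
        unset req.http.Accept-Encoding;
    }
}

Comparing varnishlog output with and without the header forwarded should at least show whether the read_error only appears on the gzip path.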
-Ronan Varnishlog output 18 ReqStart c 66.249.66.246 63009 1282436348 18 RxRequest c GET 18 RxURL c /sitemap_362.xml.gz 18 RxProtocol c HTTP/1.1 18 RxHeader c Host: www.sitename.net 18 RxHeader c Connection: Keep-alive 18 RxHeader c Accept: */* 18 RxHeader c From: googlebot(at)googlebot.com 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 18 RxHeader c Accept-Encoding: gzip,deflate 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT 18 VCL_call c recv 18 VCL_return c lookup 18 VCL_call c hash 18 VCL_return c hash 18 VCL_call c miss 18 VCL_return c fetch 18 Backend c 40 sitename sitename1 40 TxRequest b GET 40 TxURL b /sitemap_362.xml.gz 40 TxProtocol b HTTP/1.1 40 TxHeader b Host: www.sitename.net 40 TxHeader b Accept: */* 40 TxHeader b From: googlebot(at)googlebot.com 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 40 TxHeader b Accept-Encoding: gzip,deflate 40 TxHeader b X-Forwarded-For: 66.249.66.246 40 TxHeader b X-Varnish: 1282436348 40 RxProtocol b HTTP/1.1 40 RxStatus b 200 40 RxResponse b OK 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT 40 RxHeader b Server: Apache 40 RxHeader b Content-Length: 130327 40 RxHeader b Content-Encoding: gzip 40 RxHeader b Vary: Accept-Encoding 40 RxHeader b Content-Type: application/x-gzip 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 18 VCL_call c fetch 18 VCL_return c deliver 18 ObjProtocol c HTTP/1.1 18 ObjStatus c 200 18 ObjResponse c OK 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT 18 ObjHeader c Server: Apache 18 ObjHeader c Content-Encoding: gzip 18 ObjHeader c Vary: Accept-Encoding 18 ObjHeader c Content-Type: application/x-gzip 18 FetchError c straight read_error: 0 40 Fetch_Body b 4 4294967295 1 40 BackendClose b sitename1 18 VCL_call c error 18 VCL_return c deliver 18 VCL_call c deliver 18 VCL_return c deliver 18 TxProtocol c HTTP/1.1 18 TxStatus c 503 18 TxResponse c Service Unavailable 18 TxHeader c Server: Varnish 18 TxHeader c Retry-After: 0 18 TxHeader c Content-Type: text/html; charset=utf-8 18 TxHeader c Content-Length: 419 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT 18 TxHeader c X-Varnish: 1282436348 18 TxHeader c Age: 1 18 TxHeader c Via: 1.1 varnish 18 TxHeader c Connection: close 18 Length c 419 18 ReqEnd c 1282436348 1299316657.660784483 1299316658.684726000 0.478523970 1.023897409 0.000044107 18 SessionClose c error 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From straightflush at gmail.com Sun Mar 6 23:39:41 2011 From: straightflush at gmail.com (AD) Date: Sun, 6 Mar 2011 17:39:41 -0500 Subject: Lots of configs Message-ID: Hello, what is the best way to run an instance of varnish that may need different vcl configurations for each hostname. This could end up being 100-500 includes to map to each hostname and then a long if/then block based on the hostname. Is there a more scalable way to deal with this? We have been toying with running one large varnish instance with tons of includes or possibly running multiple instances of varnish (with the config broken up) or spreading the load across different clusters (kind of like sharding) based on hostname to keep the configuration simple. Any best practices here? 
Are there any notes on the performance impact of the size of the VCL or the amount of if/then statements in vcl_recv to process a unique call function ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelcrocha at gmail.com Fri Mar 4 17:50:33 2011 From: rafaelcrocha at gmail.com (rafael) Date: Fri, 04 Mar 2011 13:50:33 -0300 Subject: ESI include does not work until I reload page Message-ID: <4D711859.2010101@gmail.com> Hello everyone. I am using varnish to cache my Plone site, with xdv. I have the following configuration: nginx - varnish - nginx (apply xdv transf) - haproxy - plone. My problem is that the first time I open a page, my esi includes are not interpreted.. I get a blank content, and in firebug I can see the esi statement. (If I ask firefox to show me the source, it makes a new request, so the source displayed has the correct replacements). If I reload the page, or open it in a new tab everything works perfectly. The problem is only the first time a browser open the pages. If I close and reopen the browser, the first time the page is opened, the error appears again.. My varnish.vcl config: # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. backend backend_0 { .host = "127.0.0.1"; .port = "1010"; .connect_timeout = 0.4s; .first_byte_timeout = 300s; .between_bytes_timeout = 60s; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { set req.grace = 120s; set req.backend = backend_0; if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ pass; } if (req.http.If-None-Match) { pass; } if (req.url ~ "createObject") { pass; } remove req.http.Accept-Encoding; lookup; } sub vcl_pipe { # This is not necessary if you do not do any request rewriting. set req.http.connection = "close"; set bereq.http.connection = "close"; } sub vcl_hit { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged"; } if (!obj.cacheable) { pass; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache"; } } sub vcl_fetch { set obj.grace = 120s; if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") { pass; } if (req.http.Authorization && !obj.http.Cache-Control ~ "public") { pass; } if (obj.http.Content-Type ~ "text/html") { esi; } } sub vcl_hash { set req.hash += req.url; set req.hash += req.http.host; if (req.http.Accept-Encoding ~ "gzip") { set req.hash += "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.hash += "deflate"; } } Thanks for all, Rafael From neltnerb at MIT.EDU Sat Mar 5 04:33:05 2011 From: neltnerb at MIT.EDU (Brian Neltner) Date: Fri, 04 Mar 2011 20:33:05 -0700 Subject: Hosting multiple virtualhosts in apache2 Message-ID: <1299295985.23065.22.camel@zeeman> Dear Varnish, I'll preface this with saying that I am not an IT person, and so although I think I sort of get the gist of how this all works, if I don't have fairly explicit instructions on how things work I get very confused. That said, I have a slicehost server hosting http://saikoled.com which has varnish as a frontend. 
Varnish listens on port 80, and apache2 listens on port 8080 for ServerName www.saikoled.com with ServerAliases for saikoled.com, saikoled.net, and www.saikoled.net. What I want to do is have the slice host a different website from the same IP address, microreactorsolutions.com. I *think* that I know how to set apache2 up with a virtualhost for this, and my thought was to tell it that the virtualhost should listen on port 8079 instead of 8080 (although maybe this isn't necessary). To try to do this, I looked at the documentation for Advanced Backend Documentation here (http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html). However, the application they're looking at here is sufficiently different from what I want to do (although frustratingly close), that I can't tell what to do. It seems that this is setup to have a subdirectory that matches the regexp "^/java/" go to the other port on the backend, which is all well and good, but this doesn't seem to be something that is likely to work with a totally different ServerName (after all, the ^ suggests pretty strongly that the matching doesn't begin until after the ServerName). I also saw in the "Health Checks" some stuff that looked like it did in fact do some stuff with actual ServerNames, but I really don't get how to tell Varnish where to pull requests on port 80 from which as far as I can see is done with regexps that don't handle what I'm looking for. Sorry if this is covered somewhere more obscure in the manual, but as I said, I'm really not particularly good with computers despite the mit email address (I do chemistry...), and trying to work through this entire manual in detail is going to drive me crazy. Best, Brian Neltner From david at firechaser.com Mon Mar 7 09:33:25 2011 From: david at firechaser.com (David Murphy) Date: Mon, 7 Mar 2011 08:33:25 +0000 Subject: Hosting multiple virtualhosts in apache2 In-Reply-To: <1299295985.23065.22.camel@zeeman> References: <1299295985.23065.22.camel@zeeman> Message-ID: Hi Brian Unless the second site is doing something unusual, I don't think you need worry about having its virtualhost listen on another port. Just have all of your websites configured to run on port 8080 and then any site-specific rules (such as which pages/assets can be cached) can be added to the VCL file. We have a server that has a Varnish front end and about 6 or 7 very different websites running under Apache (port 8080) on the backend, all with their own unique domain names. For the most part all sites share the same rules e.g. such as 'always cache images' and 'never cache .php' but a couple of sites need to be treated different e.g. 'do not cache anything in the /blah directory of site abc' and we add that rule to the VCL file. Best, David On Sat, Mar 5, 2011 at 3:33 AM, Brian Neltner wrote: > Dear Varnish, > > I'll preface this with saying that I am not an IT person, and so > although I think I sort of get the gist of how this all works, if I > don't have fairly explicit instructions on how things work I get very > confused. > > That said, I have a slicehost server hosting http://saikoled.com which > has varnish as a frontend. Varnish listens on port 80, and apache2 > listens on port 8080 for ServerName www.saikoled.com with ServerAliases > for saikoled.com, saikoled.net, and www.saikoled.net. > > What I want to do is have the slice host a different website from the > same IP address, microreactorsolutions.com. 
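The kind of site-specific exception David describes (do not cache anything in the /blah directory of one site) might look roughly like the sketch below; the hostname is a placeholder and the path is taken from his example:

sub vcl_recv {
    # Site-specific rule: never cache /blah/ on this particular vhost.
    if (req.http.host ~ "(?i)^(www\.)?example-abc\.com$" && req.url ~ "^/blah/") {
        pass;
    }
}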
> > I *think* that I know how to set apache2 up with a virtualhost for this, > and my thought was to tell it that the virtualhost should listen on port > 8079 instead of 8080 (although maybe this isn't necessary). > > To try to do this, I looked at the documentation for Advanced Backend > Documentation here > ( > http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html > ). > > However, the application they're looking at here is sufficiently > different from what I want to do (although frustratingly close), that I > can't tell what to do. It seems that this is setup to have a > subdirectory that matches the regexp "^/java/" go to the other port on > the backend, which is all well and good, but this doesn't seem to be > something that is likely to work with a totally different ServerName > (after all, the ^ suggests pretty strongly that the matching doesn't > begin until after the ServerName). > > I also saw in the "Health Checks" some stuff that looked like it did in > fact do some stuff with actual ServerNames, but I really don't get how > to tell Varnish where to pull requests on port 80 from which as far as I > can see is done with regexps that don't handle what I'm looking for. > > Sorry if this is covered somewhere more obscure in the manual, but as I > said, I'm really not particularly good with computers despite the mit > email address (I do chemistry...), and trying to work through this > entire manual in detail is going to drive me crazy. > > Best, > Brian Neltner > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 14:02:27 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 08:02:27 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: <4D74D763.706@sbgnet.com> The best way would be to use a jump table. By that, I mean to make multiple subroutines in C, and then to jump to the different subroutines by looking up pointers to the subroutines using a string hashing/lookup system. You would also need a flag to indicate whether the hash has been 'initialized' yet as well. The initialization would consist of storing function pointers at the hash locations corresponding to each of the domains. I attempted to do this myself when I first started using varnish, but I was having problems with varnish crashing when attempting to use the code I wrote in C. There may be limitations to the C code that can be used. On 3/6/2011 5:39 PM, AD wrote: > Hello, > what is the best way to run an instance of varnish that may need > different vcl configurations for each hostname. This could end up > being 100-500 includes to map to each hostname and then a long if/then > block based on the hostname. Is there a more scalable way to deal > with this? We have been toying with running one large varnish > instance with tons of includes or possibly running multiple instances > of varnish (with the config broken up) or spreading the load across > different clusters (kind of like sharding) based on hostname to keep > the configuration simple. > > Any best practices here? Are there any notes on the performance > impact of the size of the VCL or the amount of if/then statements in > vcl_recv to process a unique call function ? 
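For comparison, the plain-VCL baseline that a jump table would speed up is one include per site plus a host test in vcl_recv, roughly as sketched below (the file names, subroutine names and hostnames are all made up). It works, but with hundreds of hostnames the if/else chain is exactly the linear scan being discussed here:

include "sites/site-one.vcl";   # assumed to define sub recv_site_one
include "sites/site-two.vcl";   # assumed to define sub recv_site_two

sub vcl_recv {
    if (req.http.host ~ "(?i)^(www\.)?site-one\.example$") {
        call recv_site_one;
    } else if (req.http.host ~ "(?i)^(www\.)?site-two\.example$") {
        call recv_site_two;
    }
}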
> > Thanks > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 15:23:54 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 09:23:54 -0500 Subject: Lots of configs In-Reply-To: <4D74D763.706@sbgnet.com> References: <4D74D763.706@sbgnet.com> Message-ID: but don't all the configs need to be loaded at runtime? I'm not sure about the overhead here. I think what you mentioned seems like a really innovative way to "call" the function, but what about any impact from "loading" all these configs? If I understand what you are saying, I put a "call test_func;" in vcl_recv which turned into this in C: if (VGC_function_test_func(sp)) return (1); Are you suggesting your hash table would take over this step? Adam On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski wrote: > The best way would be to use a jump table. > By that, I mean to make multiple subroutines in C, and then to jump to the > different subroutines by looking > up pointers to the subroutines using a string hashing/lookup system. > > You would also need a flag to indicate whether the hash has been > 'initialized' yet as well. > The initialization would consist of storing function pointers at the hash > locations corresponding to each > of the domains. > > I attempted to do this myself when I first started using varnish, but I was > having problems with varnish > crashing when attempting to use the code I wrote in C. There may be > limitations to the C code that can be > used. > > > On 3/6/2011 5:39 PM, AD wrote: > > Hello, > > what is the best way to run an instance of varnish that may need different > vcl configurations for each hostname. This could end up being 100-500 > includes to map to each hostname and then a long if/then block based on the > hostname. Is there a more scalable way to deal with this? We have been > toying with running one large varnish instance with tons of includes or > possibly running multiple instances of varnish (with the config broken up) > or spreading the load across different clusters (kind of like sharding) > based on hostname to keep the configuration simple. > > Any best practices here? Are there any notes on the performance impact > of the size of the VCL or the amount of if/then statements in vcl_recv to > process a unique call function ? > > Thanks > > > _______________________________________________ > varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 15:45:41 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 09:45:41 -0500 Subject: Lots of configs In-Reply-To: References: <4D74D763.706@sbgnet.com> Message-ID: <4D74EF95.7090907@sbgnet.com>
I am not sure if there is any 'init' function when varnish is called, so I was suggesting that the hash be initiated by just checking if the hash has been created yet. This will cause a penalty to the first vcl_recv call that goes through; but that shouldn't matter. Note that I just passed a dummy number as an example to the custom config, and that I didn't show how to do anything in the custom function. In this example, all custom stuff would be in straight C. You would need to use varnish itself to compile what config you want and look at the C code it generates to figure out how to tie in all your custom configs.... eg: C{ #include "hash.c" // a string hashing store/lookup libary; you'll need to write one // or possibly just use some freely available one. hashc *hash=0; void init_hash() { if( hash ) return; hash.store( 'test.com', &test_com ); // same for all domains } void test_com( int n ) { // custom vcl_recv stuff for domain 'test' } } sub vcl_recv { C{ char *domain; // [ place some code to fetch domain and put it in domain here ] if( !hash ) init_hash(); void (*func)(int); func = hash.lookup( domain ); func(1); } } On 3/7/2011 9:23 AM, AD wrote: > but dont all the configs need to be loaded at runtime, not sure the > overhead here? I think what you mentioned seems like a really > innovative way to "call" the function but what about anyimpact to > "loading" all these configs? > > If i understand what you are saying, i put a "call test_func;" in > vcl_recv which turned into this in C > > if (VGC_function_test_func(sp)) > return (1); > if > > Are you suggesting your hash_table would take over this step ? > > Adam > > On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski > wrote: > > The best way would be to use a jump table. > By that, I mean to make multiple subroutines in C, and then to > jump to the different subroutines by looking > up pointers to the subroutines using a string hashing/lookup system. > > You would also need a flag to indicate whether the hash has been > 'initialized' yet as well. > The initialization would consist of storing function pointers at > the hash locations corresponding to each > of the domains. > > I attempted to do this myself when I first started using varnish, > but I was having problems with varnish > crashing when attempting to use the code I wrote in C. There may > be limitations to the C code that can be > used. > > > On 3/6/2011 5:39 PM, AD wrote: >> Hello, >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end >> up being 100-500 includes to map to each hostname and then a long >> if/then block based on the hostname. Is there a more scalable >> way to deal with this? We have been toying with running one >> large varnish instance with tons of includes or possibly running >> multiple instances of varnish (with the config broken up) or >> spreading the load across different clusters (kind of like >> sharding) based on hostname to keep the configuration simple. >> >> Any best practices here? Are there any notes on the performance >> impact of the size of the VCL or the amount of if/then statements >> in vcl_recv to process a unique call function ? 
>> >> Thanks >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlane at ahbelo.com Mon Mar 7 15:58:08 2011 From: rlane at ahbelo.com (Lane, Richard) Date: Mon, 7 Mar 2011 14:58:08 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup Message-ID: I am looking into supporting Google?s ?First Click Free for Web Search?. I need to allow the GoogleBots to index the full content of my sites but still maintain the Registration wall for everyone else. Google suggests that you detect there GoogleBots by reverse DNS lookup of the requesters IP. Google Desc: http://www.google.com/support/webmasters/bin/answer.py?answer=80553 Has anyone done DNS lookups via VCL to verify access to content or to cache content? System Desc: Varnish 2.1.4 RHEL 5-4 Apache 2.2x - Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattias at nucleus.be Mon Mar 7 16:05:08 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Mon, 7 Mar 2011 16:05:08 +0100 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: References: Message-ID: <18834F5BEC10824891FB8B22AC821A5A013D0CEB@nucleus-srv01.Nucleus.local> Hi, I would look at the user agent to verify if it's a GoogleBot or not, as that's more easily checked via VCL. All GoogleBots also adhere to the correct User-Agent. There really aren't that many users that spoof their User-Agent to gain extra access. Also keep in mind that serving GoogleBot different content than actual users will get you penalties in SEO, eventually dropping your Google ranking. Just, FYI. Regards, Mattias From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Lane, Richard Sent: maandag 7 maart 2011 15:58 To: varnish-misc at varnish-cache.org Subject: Let GoogleBot Crawl full content, reverse DNS lookup I am looking into supporting Google's "First Click Free for Web Search". I need to allow the GoogleBots to index the full content of my sites but still maintain the Registration wall for everyone else. Google suggests that you detect there GoogleBots by reverse DNS lookup of the requesters IP. Google Desc: http://www.google.com/support/webmasters/bin/answer.py?answer=80553 Has anyone done DNS lookups via VCL to verify access to content or to cache content? System Desc: Varnish 2.1.4 RHEL 5-4 Apache 2.2x - Richard From richard.chiswell at mangahigh.com Mon Mar 7 16:08:03 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 07 Mar 2011 15:08:03 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: References: Message-ID: <4D74F4D3.6040008@mangahigh.com> On 07/03/2011 14:58, Lane, Richard wrote: > > I am looking into supporting Google?s ?First Click Free for Web > Search?. I need to allow the GoogleBots to index the full content of > my sites but still maintain the Registration wall for everyone else. > Google suggests that you detect there GoogleBots by reverse DNS lookup > of the requesters IP. 
> > Google Desc: > http://www.google.com/support/webmasters/bin/answer.py?answer=80553 > > Has anyone done DNS lookups via VCL to verify access to content or to > cache content? I believe this /could/ be done using a C function, but it's not something I've had experience of before. What you could do is detect the Google user-agent in varnish, and then pass that and the IP to a backend script with the original request: such as /* Varnish 2.0.6 psuedo code - may need updating */ if (req.http.user-agent == "Googlebot") { set.http.x-varnish-originalurl = req.url; set req.url = "/googlecheck?ip= " client.ip "&originalurl=" req.url; lookup; } and the Googlecheck script actually does the rDNS look up and if it matches, it returns the contents of the requested url. Richard Chiswell http://www.mangahigh.com (Speaking personally yadda yadda) -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 16:30:22 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 10:30:22 -0500 Subject: Lots of configs In-Reply-To: <4D74EF95.7090907@sbgnet.com> References: <4D74D763.706@sbgnet.com> <4D74EF95.7090907@sbgnet.com> Message-ID: Cool, as for the startup, i wonder if you can instead of trying to insert into VCL_Init, try to do just, as part of the startup process hit a special URL to load the hash_Table. Or another possibility might be to load an external module, and in there, populate the hash. On Mon, Mar 7, 2011 at 9:45 AM, David Helkowski wrote: > vcl configuration is turned straight into C first of all. > You can put your own C code in both the functions and globally. > When including headers/libraries, you essentially just have to include the > code globally. > > I am not sure if there is any 'init' function when varnish is called, so I > was suggesting that > the hash be initiated by just checking if the hash has been created yet. > > This will cause a penalty to the first vcl_recv call that goes through; but > that shouldn't > matter. > > Note that I just passed a dummy number as an example to the custom config, > and that > I didn't show how to do anything in the custom function. In this example, > all custom > stuff would be in straight C. You would need to use varnish itself to > compile what config > you want and look at the C code it generates to figure out how to tie in > all your custom > configs.... > > eg: > > C{ > #include "hash.c" // a string hashing store/lookup libary; you'll need to > write one > // or possibly just use some freely available one. > hashc *hash=0; > > void init_hash() { > if( hash ) return; > hash.store( 'test.com', &test_com ); > // same for all domains > } > > void test_com( int n ) { > // custom vcl_recv stuff for domain 'test' > } > } > > sub vcl_recv { > C{ > char *domain; > // [ place some code to fetch domain and put it in domain here ] > if( !hash ) init_hash(); > void (*func)(int); > func = hash.lookup( domain ); > func(1); > > } > } > > On 3/7/2011 9:23 AM, AD wrote: > > but dont all the configs need to be loaded at runtime, not sure the > overhead here? I think what you mentioned seems like a really innovative > way to "call" the function but what about anyimpact to "loading" all these > configs? > > If i understand what you are saying, i put a "call test_func;" in > vcl_recv which turned into this in C > > if (VGC_function_test_func(sp)) > return (1); > if > > Are you suggesting your hash_table would take over this step ? 
> > Adam > > On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski wrote: > >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then to jump to the >> different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup system. >> >> You would also need a flag to indicate whether the hash has been >> 'initialized' yet as well. >> The initialization would consist of storing function pointers at the hash >> locations corresponding to each >> of the domains. >> >> I attempted to do this myself when I first started using varnish, but I >> was having problems with varnish >> crashing when attempting to use the code I wrote in C. There may be >> limitations to the C code that can be >> used. >> >> >> On 3/6/2011 5:39 PM, AD wrote: >> >> Hello, >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? We have >> been toying with running one large varnish instance with tons of includes or >> possibly running multiple instances of varnish (with the config broken up) >> or spreading the load across different clusters (kind of like sharding) >> based on hostname to keep the configuration simple. >> >> Any best practices here? Are there any notes on the performance impact >> of the size of the VCL or the amount of if/then statements in vcl_recv to >> process a unique call function ? >> >> Thanks >> >> >> _______________________________________________ >> varnish-misc mailing listvarnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 16:56:20 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 10:56:20 -0500 Subject: Lots of configs In-Reply-To: References: <4D74D763.706@sbgnet.com> <4D74EF95.7090907@sbgnet.com> Message-ID: <4D750024.1060400@sbgnet.com> It is true that there are potentially better places to setup the hash, but it is best to check for a null pointer for the hash object anyway any time you use it. The setup itself is also very fast; you just don't want to do it every time. Note in my init function I forgot a 'hash = new hashc()'. Also; if you are going to do this, you will likely have a preset list of domains you are using. In such a case, the best type of hash to use would be a 'minimal perfect hash'. You could use the 'gperf' library to generate a suitable algorithm to map your domain strings into an array. On 3/7/2011 10:30 AM, AD wrote: > Cool, as for the startup, i wonder if you can instead of trying to > insert into VCL_Init, try to do just, as part of the startup process > hit a special URL to load the hash_Table. Or another possibility > might be to load an external module, and in there, populate the hash. 
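To make the shape of that jump table concrete, here is a sketch along the lines described in this thread. The domain names and handler functions are placeholders, the linear scan stands in for whatever gperf or another perfect-hash generator would produce, and the part that pulls req.http.host into a C string is deliberately left as a comment because it depends on Varnish internals:

C{
    #include <string.h>

    typedef void (*host_fn)(void);

    static void handle_test_com(void)    { /* per-host tweaks for test.com */ }
    static void handle_example_org(void) { /* per-host tweaks for example.org */ }

    struct host_entry { const char *host; host_fn fn; };

    /* NULL-terminated table; a generated perfect hash would replace the scan below */
    static const struct host_entry host_table[] = {
        { "test.com",    handle_test_com    },
        { "example.org", handle_example_org },
        { NULL,          NULL               },
    };

    static void dispatch_host(const char *host)
    {
        int i;
        for (i = 0; host_table[i].host != NULL; i++) {
            if (strcmp(host_table[i].host, host) == 0) {
                host_table[i].fn();
                return;
            }
        }
        /* unknown host: fall through to the plain VCL default */
    }
}C

sub vcl_recv {
    C{
        /* the real Host header would be passed in here */
        dispatch_host("test.com");
    }C
}

Compared with the initialise-on-first-request approach above, a static table like this avoids the question of when to build the hash at all; the trade-off is that the table has to be generated at the same time as the VCL itself.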
> > > > On Mon, Mar 7, 2011 at 9:45 AM, David Helkowski > wrote: > > vcl configuration is turned straight into C first of all. > You can put your own C code in both the functions and globally. > When including headers/libraries, you essentially just have to > include the code globally. > > I am not sure if there is any 'init' function when varnish is > called, so I was suggesting that > the hash be initiated by just checking if the hash has been > created yet. > > This will cause a penalty to the first vcl_recv call that goes > through; but that shouldn't > matter. > > Note that I just passed a dummy number as an example to the custom > config, and that > I didn't show how to do anything in the custom function. In this > example, all custom > stuff would be in straight C. You would need to use varnish itself > to compile what config > you want and look at the C code it generates to figure out how to > tie in all your custom > configs.... > > eg: > > C{ > #include "hash.c" // a string hashing store/lookup libary; > you'll need to write one > // or possibly just use some freely available one. > hashc *hash=0; > > void init_hash() { > if( hash ) return; > > hash.store( 'test.com ', &test_com ); > // same for all domains > } > > void test_com( int n ) { > // custom vcl_recv stuff for domain 'test' > } > } > > sub vcl_recv { > C{ > char *domain; > // [ place some code to fetch domain and put it in domain here ] > if( !hash ) init_hash(); > void (*func)(int); > func = hash.lookup( domain ); > func(1); > > } > } > > On 3/7/2011 9:23 AM, AD wrote: >> but dont all the configs need to be loaded at runtime, not sure >> the overhead here? I think what you mentioned seems like a >> really innovative way to "call" the function but what about >> anyimpact to "loading" all these configs? >> >> If i understand what you are saying, i put a "call test_func;" in >> vcl_recv which turned into this in C >> >> if (VGC_function_test_func(sp)) >> return (1); >> if >> >> Are you suggesting your hash_table would take over this step ? >> >> Adam >> >> On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski >> > wrote: >> >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then >> to jump to the different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup >> system. >> >> You would also need a flag to indicate whether the hash has >> been 'initialized' yet as well. >> The initialization would consist of storing function pointers >> at the hash locations corresponding to each >> of the domains. >> >> I attempted to do this myself when I first started using >> varnish, but I was having problems with varnish >> crashing when attempting to use the code I wrote in C. There >> may be limitations to the C code that can be >> used. >> >> >> On 3/6/2011 5:39 PM, AD wrote: >>> Hello, >>> what is the best way to run an instance of varnish that may >>> need different vcl configurations for each hostname. This >>> could end up being 100-500 includes to map to each hostname >>> and then a long if/then block based on the hostname. Is >>> there a more scalable way to deal with this? We have been >>> toying with running one large varnish instance with tons of >>> includes or possibly running multiple instances of varnish >>> (with the config broken up) or spreading the load across >>> different clusters (kind of like sharding) based on hostname >>> to keep the configuration simple. >>> >>> Any best practices here? 
Are there any notes on the >>> performance impact of the size of the VCL or the amount of >>> if/then statements in vcl_recv to process a unique call >>> function ? >>> >>> Thanks >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junxian.yan at gmail.com Mon Mar 7 16:56:09 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 07:56:09 -0800 Subject: Weird "^" in the regex of varnish Message-ID: Hi Guys I encountered this issue in two different environment(env1 and env2). The sample code is like: in vcl_fetch() else if (req.url ~ "^/tables/\w{6}/summary.js") { if (req.http.Set-Cookie !~ " u=\w") { unset beresp.http.Set-Cookie; set beresp.ttl = 2h; set beresp.grace = 22h; return(deliver); } else { return(pass); } } In env1, the request like http://mytest.com/api/v2/tables/vyulrh/read.jsamlcan enter lookup and then enter fetch to create a new cache entry. Next time, the same request will hit cache and do not do fetch anymore In env2, the same request enter and go into vcl_fetch, the regex will fail and can not enter deliver, so the resp will be sent to end user without cache creating. I'm not sure if there is somebody has the same issue. Is it platform related ? R -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlane at ahbelo.com Mon Mar 7 17:51:49 2011 From: rlane at ahbelo.com (Lane, Richard) Date: Mon, 7 Mar 2011 16:51:49 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A013D0CEB@nucleus-srv01.Nucleus.local> Message-ID: Mattias, I am aware of Google's policy about serving different content to search users, which is why I am have to implement their "First Click Free" program. I will use the User-Agent but need to go a step further and verify the crawler is who they say they are by DNS. Cheers, Richard On 3/7/11 9:05 AM, "Mattias Geniar" wrote: > Hi, > > I would look at the user agent to verify if it's a GoogleBot or not, as > that's more easily checked via VCL. All GoogleBots also adhere to the > correct User-Agent. > There really aren't that many users that spoof their User-Agent to gain > extra access. > > Also keep in mind that serving GoogleBot different content than actual > users will get you penalties in SEO, eventually dropping your Google > ranking. Just, FYI. > > Regards, > Mattias > > From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Lane, > Richard > Sent: maandag 7 maart 2011 15:58 > To: varnish-misc at varnish-cache.org > Subject: Let GoogleBot Crawl full content, reverse DNS lookup > > > I am looking into supporting Google's "First Click Free for Web Search". > I need to allow the GoogleBots to index the full content of my sites but > still maintain the Registration wall for everyone else. Google suggests > that you detect there GoogleBots by reverse DNS lookup of the requesters > IP. 
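A 2.1-syntax sketch of the hand-off Richard Chiswell described earlier in this thread, with the User-Agent check in front of it; the /googlecheck path and the original-URL header come from that suggestion, everything else is assumption:

sub vcl_recv {
    if (req.http.User-Agent ~ "Googlebot") {
        # remember the real URL, then let a backend script do the reverse
        # DNS verification and return the content only if it checks out
        set req.http.X-Varnish-OriginalUrl = req.url;
        set req.url = "/googlecheck?ip=" client.ip "&originalurl=" req.http.X-Varnish-OriginalUrl;
        return (lookup);
    }
}

Returning lookup here means the verification result may end up cached per client IP; pass would avoid that at the cost of hitting the script on every request.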
> > Google Desc: > http://www.google.com/support/webmasters/bin/answer.py?answer=80553 > > Has anyone done DNS lookups via VCL to verify access to content or to > cache content? > > System Desc: > Varnish 2.1.4 > RHEL 5-4 > Apache 2.2x > > - Richard From perbu at varnish-software.com Mon Mar 7 19:35:36 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 19:35:36 +0100 Subject: Lots of configs In-Reply-To: References: Message-ID: Hi, On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > > what is the best way to run an instance of varnish that may need different > vcl configurations for each hostname. This could end up being 100-500 > includes to map to each hostname and then a long if/then block based on the > hostname. Is there a more scalable way to deal with this? > CPU and memory bandwidth is abundant on modern servers. I'm actually not sure that having a 500 entries long if/else statement will hamper performance at all. Remember, there will be no system calls. I would guess a modern server will execute at least a four million regex-based if/else per second per CPU core if most of the code and data will be in the on die cache. So executing 500 matches should take about 0.5ms. It might not make sense to optimize this. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 19:52:22 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 13:52:22 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: <4D752966.7000203@sbgnet.com> A modern CPU can run, at most, around 10 million -assembly based- instructions per second. See http://en.wikipedia.org/wiki/Instructions_per_second A regular expression compare is likely at least 20 or so assembly instructions. That gives around 500,000 regular expression compares if you are using 100% of the CPU just for that. A reasonable amount of CPU to consume would be 30% ( at most ). So; you are left with around 150k regular expression checks per second. Lets suppose there are 500 different domains. On average, you will be doing 250 if/else checks per call. 150k / 250 = 600. That means that you will get, under fair conditions, a max of about 600 hits per second. The person asking the question likely has 500 domains running. That gives a little over 1 hit possible per second per domain. Do you think that is an acceptable solution for this person? I think not. Compare it to a hash lookup. A hash lookup, using a good minimal perfect hashing algorithms, will take at most around 10 operations. Using the same math as above, that gives around 300k lookups per second. A hash would be roughly 500 times faster than using if/else... On 3/7/2011 1:35 PM, Per Buer wrote: > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD > wrote: > > > what is the best way to run an instance of varnish that may need > different vcl configurations for each hostname. This could end up > being 100-500 includes to map to each hostname and then a long > if/then block based on the hostname. Is there a more scalable way > to deal with this? > > > CPU and memory bandwidth is abundant on modern servers. I'm actually > not sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. 
I would > guess a modern server will execute at least a four million regex-based > if/else per second per CPU core if most of the code and data will be > in the on die cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 19:55:09 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 13:55:09 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: Thanks Per. I guess the other part of this was to make the config more scalable so we are not constantly adding if/else blocks. Would by nice to have a way to just do something like call(custom_ + req.hostname) On Mon, Mar 7, 2011 at 1:35 PM, Per Buer wrote: > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? >> > > CPU and memory bandwidth is abundant on modern servers. I'm actually not > sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. I would guess a > modern server will execute at least a four million regex-based if/else per > second per CPU core if most of the code and data will be in the on die > cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan at iol.ie Mon Mar 7 20:45:43 2011 From: ronan at iol.ie (Ronan Mullally) Date: Mon, 7 Mar 2011 19:45:43 +0000 (GMT) Subject: Varnish returning 503s for Googlebot requests (Bug #813?) In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> References: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> Message-ID: Hi Mattias, On Sun, 6 Mar 2011, Mattias Geniar wrote: > Not sure if you've managed to test this yet, but Google seem to run with > "Accept-Encoding: gzip". Perhaps there's a problem serving the > compressed version, whereas your manual wget's don't use this > accept-encoding? You're spot on. Adding an Accept-Encoding header to my wget requests resulted in failures. The content length reported being longer than that actually retrieved. I tracked the fault down to PHP doing compression via zlib.compression. Thanks for your help. 
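Since the difference only showed up once Accept-Encoding entered the picture, it may also be worth mentioning the usual 2.1-era habit of normalizing that header in vcl_recv so the cache holds a predictable set of variants; this is the standard pattern rather than anything specific to this site:

sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else if (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown or unhandled encoding
            unset req.http.Accept-Encoding;
        }
    }
}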
-Ronan > -----Original Message----- > >From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan > Mullally > Sent: zaterdag 5 maart 2011 10:48 > To: varnish-misc at varnish-cache.org > Subject: Varnish returning 503s for Googlebot requests (Bug #813?) > > Hi, > > I'm a varnish noob. I've only just started rolling out a cache in front > of a VBulletin site running Apache that is currently using pound for > load > balancing. > > I'm running 2.1.5 on a debian lenny box. Testing is going well, apart > from one problem. The site runs VBSEO to generate sitemap files. > Without excpetion, every time Googlebot tries to request these files > Varnish returns a 503: > > 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET > http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" > "Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html)" > > I can request these files via wget direct from the backend as well as > direct from varnish without a problem: > > --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz > > HTTP request sent, awaiting response... > HTTP/1.1 200 OK > Server: Apache > Content-Type: application/x-gzip > Content-Length: 130283 > Date: Sat, 05 Mar 2011 09:23:38 GMT > X-Varnish: 1282440127 > Age: 0 > Via: 1.1 varnish > Connection: keep-alive > Length: 130283 (127K) [application/x-gzip] > Saving to: `/dev/null' > > 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] > > I've reverted back to default.vcl, the only changes being to define my > own > backends. Varnishlog output is below. Having googled a bit the only > thing I've found is bug #813, but that was apparently fixed prior to > 2.1.5. Am I missing something obvious? > > > -Ronan > > > Varnishlog output > > 18 ReqStart c 66.249.66.246 63009 1282436348 > 18 RxRequest c GET > 18 RxURL c /sitemap_362.xml.gz > 18 RxProtocol c HTTP/1.1 > 18 RxHeader c Host: www.sitename.net > 18 RxHeader c Connection: Keep-alive > 18 RxHeader c Accept: */* > 18 RxHeader c From: googlebot(at)googlebot.com > 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html) > 18 RxHeader c Accept-Encoding: gzip,deflate > 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT > 18 VCL_call c recv > 18 VCL_return c lookup > 18 VCL_call c hash > 18 VCL_return c hash > 18 VCL_call c miss > 18 VCL_return c fetch > 18 Backend c 40 sitename sitename1 > 40 TxRequest b GET > 40 TxURL b /sitemap_362.xml.gz > 40 TxProtocol b HTTP/1.1 > 40 TxHeader b Host: www.sitename.net > 40 TxHeader b Accept: */* > 40 TxHeader b From: googlebot(at)googlebot.com > 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html) > 40 TxHeader b Accept-Encoding: gzip,deflate > 40 TxHeader b X-Forwarded-For: 66.249.66.246 > 40 TxHeader b X-Varnish: 1282436348 > 40 RxProtocol b HTTP/1.1 > 40 RxStatus b 200 > 40 RxResponse b OK > 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT > 40 RxHeader b Server: Apache > 40 RxHeader b Content-Length: 130327 > 40 RxHeader b Content-Encoding: gzip > 40 RxHeader b Vary: Accept-Encoding > 40 RxHeader b Content-Type: application/x-gzip > 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 > 18 VCL_call c fetch > 18 VCL_return c deliver > 18 ObjProtocol c HTTP/1.1 > 18 ObjStatus c 200 > 18 ObjResponse c OK > 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT > 18 ObjHeader c Server: Apache > 18 ObjHeader c Content-Encoding: gzip > 18 ObjHeader c Vary: 
Accept-Encoding > 18 ObjHeader c Content-Type: application/x-gzip > 18 FetchError c straight read_error: 0 > 40 Fetch_Body b 4 4294967295 1 > 40 BackendClose b sitename1 > 18 VCL_call c error > 18 VCL_return c deliver > 18 VCL_call c deliver > 18 VCL_return c deliver > 18 TxProtocol c HTTP/1.1 > 18 TxStatus c 503 > 18 TxResponse c Service Unavailable > 18 TxHeader c Server: Varnish > 18 TxHeader c Retry-After: 0 > 18 TxHeader c Content-Type: text/html; charset=utf-8 > 18 TxHeader c Content-Length: 419 > 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT > 18 TxHeader c X-Varnish: 1282436348 > 18 TxHeader c Age: 1 > 18 TxHeader c Via: 1.1 varnish > 18 TxHeader c Connection: close > 18 Length c 419 > 18 ReqEnd c 1282436348 1299316657.660784483 > 1299316658.684726000 0.478523970 1.023897409 0.000044107 > 18 SessionClose c error > 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From drew.smathers at gmail.com Mon Mar 7 21:58:15 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Mon, 7 Mar 2011 15:58:15 -0500 Subject: Varnish still 503ing after adding grace to VCL Message-ID: Hi all, I'm trying to grace as a means of ensuring that cached content is delivered from varnish past it's TTL if backends can't generate a response. With some experiments this does not seem to happen with our setup. After an object is cached, varnish still returns a 503 within the grace period if a backend goes down. Below are details. version: varnish-2.1.4 SVN 5447M I stripped down my VCL to the following to demonstrate: backend webapp { .host = "127.0.0.1"; .port = "8000"; } sub vcl_recv { set req.backend = webapp; set req.grace = 1h; } sub vcl_fetch { set beresp.grace = 24h; } Running varnish: varnishd -f simple.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 First request: GET /some/path/ HTTP/1.1 Accept-Language: en HTTP/1.1 200 OK Server: WSGIServer/0.1 Python/2.6.6 Vary: Authorization, Accept-Language, X-Gttv-Apikey Etag: "e9c12380818a05ed40ae7df4dad67751" Content-Type: application/json; charset=utf-8 Content-Language: en Cache-Control: max-age=30 Content-Length: 425 Date: Mon, 07 Mar 2011 16:12:56 GMT X-Varnish: 377135316 377135314 Age: 6 Via: 1.1 varnish Connection: close Wait 30 seconds, kill backend app, then make another request through varnish: GET /some/path/ HTTP/1.1 Accept-Language: en HTTP/1.1 503 Service Unavailable Server: Varnish Retry-After: 0 Content-Type: text/html; charset=utf-8 Content-Length: 418 Date: Mon, 07 Mar 2011 16:14:02 GMT X-Varnish: 377135317 Age: 0 Via: 1.1 varnish Connection: close Any help or clarification on request grace would be appreciated. Thanks, -Drew From brice at digome.com Mon Mar 7 22:05:52 2011 From: brice at digome.com (Brice Burgess) Date: Mon, 07 Mar 2011 15:05:52 -0600 Subject: varnishncsa and -F option? Message-ID: <4D7548B0.9090608@digome.com> Is there a production-ready version of varnishncsa that supports the -F switch implemented 4 months ago here: http://www.varnish-cache.org/trac/changeset/46b90935e56c7a448fb33342f03fdcbd14478ac2? The -F / LogFormat switch allows for VirtualHost support -- although appears to have missed the 2.1.5 release? 
Thanks, ~ Brice From perbu at varnish-software.com Mon Mar 7 22:18:03 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 22:18:03 +0100 Subject: Lots of configs In-Reply-To: <4D752966.7000203@sbgnet.com> References: <4D752966.7000203@sbgnet.com> Message-ID: Hi David, List. On Mon, Mar 7, 2011 at 7:52 PM, David Helkowski wrote: > A modern CPU can run, at most, around 10 million -assembly based- > instructions per second. > See http://en.wikipedia.org/wiki/Instructions_per_second > A regular expression compare is likely at least 20 or so assembly > instructions. > That gives around 500,000 regular expression compares if you are using 100% > of the > CPU just for that. A reasonable amount of CPU to consume would be 30% ( at > most ). > So; you are left with around 150k regular expression checks per second. > I guess we should stop speculating. I wrote a short program to do in-cache pcre pattern matching. My laptop (i5 M560) seems to churn through 7M pcre matches a second so I was a bit off. The matches where anchored and small but varying it doesn't seem to affect performance much. The source for my test is here: http://pastebin.com/a68y15hp (.. ) > Compare it to a hash lookup. A hash lookup, using a good minimal perfect > hashing algorithms, > will take at most around 10 operations. Using the same math as above, that > gives around 300k > lookups per second. A hash would be roughly 500 times faster than using > if/else... > Of course a hash lookup is faster. But if you got to deploy a whole bunch of scary inline C that will seriously intimidate the summer intern and makes all the other fear the config it's just not worth it. Of course it isn't as cool a building a hash table of functions in inline C, but is it useful when the speedup gain is lost in buffer bloat anyway? I think not. Cheers, Per. > > > On 3/7/2011 1:35 PM, Per Buer wrote: > > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? >> > > CPU and memory bandwidth is abundant on modern servers. I'm actually not > sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. I would guess a > modern server will execute at least a four million regex-based if/else per > second per CPU core if most of the code and data will be in the on die > cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.shenton at nasa.gov Mon Mar 7 22:36:41 2011 From: chris.shenton at nasa.gov (Shenton, Chris (HQ-LM020)[INDYNE INC]) Date: Mon, 7 Mar 2011 15:36:41 -0600 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? Message-ID: To accommodate our hosting environment, we need to run varnish on two different ports, but we want to make both use the same cache. That is, if I have: varnishd -a 127.0.0.1:8001,127.0.0.1:8002 I want client requests to both both 8001 and 8002 ports to share the content of the same cache. So if one client hits :8002 with a URL and another later hits :8001 with the same URL, I want the latter to retrieve the content cached by the former request. In my testing, however, it seems that this is not happening, that the doc is getting cached once per varnishd port. First on 8001, we see it's not from cache as this is the first request: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421263 Then to 8002 where I'd hope it was returned from cache, but the X-Varnish and Age headers indicate it's not: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421264 Now back to the first port, 8001, and we see it is in fact returned from cache: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421265 730421263 And if I try 8002 again, it's also returned from cache: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421266 730421264 Is there a way to make both ports share the same hash? I'm guessing the listening port is included with the hash key so that's why it appears each URL is being stored separately. If so, is there a way to remove the listening port from the hash key the document is stored under? Or would that even work? Thanks. From jhayter at manta.com Mon Mar 7 22:48:37 2011 From: jhayter at manta.com (Jim Hayter) Date: Mon, 7 Mar 2011 21:48:37 +0000 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? In-Reply-To: References: Message-ID: In my environment, port numbers may be on the request, but are not needed to respond nor should they influence the cache. In my vcl_recv, I have the following lines: /* determine vhost name w/out port number */ set req.http.newhost = regsub(req.http.host, "([^:]*)(:.*)?$", "\1"); set req.http.host = req.http.newhost; This strips off the port number from the host name in the request. Doing it this way, the port number is discarded and NOT passed on to the application. It is also not present when creating and looking up hash entries. If you require the port number at the application level, you will have to do something a bit different to preserve it. Jim -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Shenton, Chris (HQ-LM020)[INDYNE INC] Sent: Monday, March 07, 2011 4:37 PM To: varnish-misc at varnish-cache.org Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? To accommodate our hosting environment, we need to run varnish on two different ports, but we want to make both use the same cache. That is, if I have: varnishd -a 127.0.0.1:8001,127.0.0.1:8002 I want client requests to both both 8001 and 8002 ports to share the content of the same cache. So if one client hits :8002 with a URL and another later hits :8001 with the same URL, I want the latter to retrieve the content cached by the former request. 
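A shorter variant of the same idea, assuming nothing downstream ever needs the original port; with the Host header normalized this way, requests arriving on :8001 and :8002 hash to the same object:

sub vcl_recv {
    # strip any :port suffix from the Host header before lookup
    set req.http.host = regsub(req.http.host, ":[0-9]+$", "");
}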
In my testing, however, it seems that this is not happening, that the doc is getting cached once per varnishd port. First on 8001, we see it's not from cache as this is the first request: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421263 Then to 8002 where I'd hope it was returned from cache, but the X-Varnish and Age headers indicate it's not: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421264 Now back to the first port, 8001, and we see it is in fact returned from cache: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421265 730421263 And if I try 8002 again, it's also returned from cache: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421266 730421264 Is there a way to make both ports share the same hash? I'm guessing the listening port is included with the hash key so that's why it appears each URL is being stored separately. If so, is there a way to remove the listening port from the hash key the document is stored under? Or would that even work? Thanks. _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From straightflush at gmail.com Mon Mar 7 23:34:45 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 17:34:45 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: you need to enable a probe in the backend for this to work i believe. On Mon, Mar 7, 2011 at 3:58 PM, Drew Smathers wrote: > Hi all, > > I'm trying to grace as a means of ensuring that cached content is > delivered from varnish past it's TTL if backends can't generate a > response. With some experiments this does not seem to happen with our > setup. After an object is cached, varnish still returns a 503 within > the grace period if a backend goes down. Below are details. > > version: varnish-2.1.4 SVN 5447M > > I stripped down my VCL to the following to demonstrate: > > backend webapp { > .host = "127.0.0.1"; > .port = "8000"; > } > > sub vcl_recv { > set req.backend = webapp; > set req.grace = 1h; > } > > > sub vcl_fetch { > set beresp.grace = 24h; > } > > Running varnish: > > varnishd -f simple.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 > > > First request: > > GET /some/path/ HTTP/1.1 > Accept-Language: en > > HTTP/1.1 200 OK > Server: WSGIServer/0.1 Python/2.6.6 > Vary: Authorization, Accept-Language, X-Gttv-Apikey > Etag: "e9c12380818a05ed40ae7df4dad67751" > Content-Type: application/json; charset=utf-8 > Content-Language: en > Cache-Control: max-age=30 > Content-Length: 425 > Date: Mon, 07 Mar 2011 16:12:56 GMT > X-Varnish: 377135316 377135314 > Age: 6 > Via: 1.1 varnish > Connection: close > > > Wait 30 seconds, kill backend app, then make another request through > varnish: > > GET /some/path/ HTTP/1.1 > Accept-Language: en > > HTTP/1.1 503 Service Unavailable > Server: Varnish > Retry-After: 0 > Content-Type: text/html; charset=utf-8 > Content-Length: 418 > Date: Mon, 07 Mar 2011 16:14:02 GMT > X-Varnish: 377135317 > Age: 0 > Via: 1.1 varnish > Connection: close > > Any help or clarification on request grace would be appreciated. 
> > Thanks, > -Drew > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Mon Mar 7 23:39:40 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 23:39:40 +0100 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 9:58 PM, Drew Smathers wrote: > Hi all, > > I'm trying to grace as a means of ensuring that cached content is > delivered from varnish past it's TTL if backends can't generate a > response. That's "Saint Mode" - please see http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode I see that there isn't too much details on the semantics there. I'll see if I can add some details. Regards, Per. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew.smathers at gmail.com Mon Mar 7 23:52:44 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Mon, 7 Mar 2011 17:52:44 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 5:39 PM, Per Buer wrote: > On Mon, Mar 7, 2011 at 9:58 PM, Drew Smathers > wrote: >> >> Hi all, >> >> I'm trying to grace as a means of ensuring that cached content is >> delivered from varnish past it's TTL if backends can't generate a >> response. > > That's "Saint Mode" - please > see?http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode > I see that there isn't too much details on the semantics there. I'll see if > I can add some details. Hi Per, I actually tried using saintmode for this problem but one point that I found tricky is that saintmode (as far as i can tell from docs) can only be set on beresp. If the backend is up, that's great because I can check a non-200 status in vcl_fetch() and set. But in the case of all backends being down, vcl_fetch() doesn't even get invoked and there isn't any other routine and object in the routine's execution context (that I know of) where I can set saintmode and restart. Thanks, -Drew From junxian.yan at gmail.com Tue Mar 8 06:22:13 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 21:22:13 -0800 Subject: Weird "^" in the regex of varnish In-Reply-To: References: Message-ID: I upgraded varnish to 2.1.5 and used log function to trace the req.url and found there was host name in 'req.url'. But I didn't find any more description about this format in wiki. So I have to do a regsub before entering every function. Dose it make sense? Below is varnish log, 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1299561539 1.0 12 SessionOpen c 10.0.2.130 56799 :6081 12 ReqStart c 10.0.2.130 56799 1589705637 12 RxRequest c GET 12 RxURL c http://staging.test.com/purge/tables/vyulrh/summary.js?grid_state_id=3815 On Mon, Mar 7, 2011 at 7:56 AM, Junxian Yan wrote: > Hi Guys > > I encountered this issue in two different environment(env1 and env2). 
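One possible shape for that regsub, as a sketch: strip a leading scheme-and-host from the request line before any of the path-anchored patterns run, so "^/tables/" matches whether or not the client sent an absolute URI:

sub vcl_recv {
    if (req.url ~ "^https?://") {
        # drop "http://host" so only the path and query string remain
        set req.url = regsub(req.url, "^https?://[^/]+", "");
    }
}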
> The sample code is like: > in vcl_fetch() > > else if (req.url ~ "^/tables/\w{6}/summary.js") { > if (req.http.Set-Cookie !~ " u=\w") { > unset beresp.http.Set-Cookie; > set beresp.ttl = 2h; > set beresp.grace = 22h; > return(deliver); > } else { > return(pass); > } > } > > In env1, the request like > http://mytest.com/api/v2/tables/vyulrh/read.jsaml can enter lookup and > then enter fetch to create a new cache entry. Next time, the same request > will hit cache and do not do fetch anymore > In env2, the same request enter and go into vcl_fetch, the regex will fail > and can not enter deliver, so the resp will be sent to end user without > cache creating. > > I'm not sure if there is somebody has the same issue. Is it platform > related ? > > > R > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn at ruberg.no Tue Mar 8 07:20:47 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Tue, 08 Mar 2011 07:20:47 +0100 Subject: Weird "^" in the regex of varnish In-Reply-To: References: Message-ID: <4D75CABF.7020403@ruberg.no> On 03/08/2011 06:22 AM, Junxian Yan wrote: > I upgraded varnish to 2.1.5 and used log function to trace the req.url > and found there was host name in 'req.url'. But I didn't find any more > description about this format in wiki. > So I have to do a regsub before entering every function. Dose it make > sense? > > Below is varnish log, > > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1299561539 1.0 > 12 SessionOpen c 10.0.2.130 56799 :6081 > 12 ReqStart c 10.0.2.130 56799 1589705637 > 12 RxRequest c GET > 12 RxURL c > http://staging.test.com/purge/tables/vyulrh/summary.js?grid_state_id=3815 Different User-Agents send different req.url. To normalize them, see http://www.varnish-cache.org/trac/wiki/VCLExampleNormalizingReqUrl Note that technically, there's nothing wrong with using hostnames in req.url, apart from possibly storing the same object under different names. However, as you have found out, some regular expressions might not work as intended until you normalize req.url. -- Bj?rn From tfheen at varnish-software.com Tue Mar 8 07:55:24 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 08 Mar 2011 07:55:24 +0100 Subject: varnishncsa and -F option? In-Reply-To: <4D7548B0.9090608@digome.com> (Brice Burgess's message of "Mon, 07 Mar 2011 15:05:52 -0600") References: <4D7548B0.9090608@digome.com> Message-ID: <8762ru3xur.fsf@qurzaw.varnish-software.com> ]] Brice Burgess | Is there a production-ready version of varnishncsa that supports the | -F | switch implemented 4 months ago here: | http://www.varnish-cache.org/trac/changeset/46b90935e56c7a448fb33342f03fdcbd14478ac2? It'll be in 3.0. | The -F / LogFormat switch allows for VirtualHost support -- although | appears to have missed the 2.1.5 release? It was never intended for or aimed at the 2.1 branch. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From paul.lu81 at gmail.com Mon Mar 7 21:37:49 2011 From: paul.lu81 at gmail.com (Paul Lu) Date: Mon, 7 Mar 2011 12:37:49 -0800 Subject: A lot of if statements to handle hostnames Message-ID: Hi, I have to work with a lot of domain names in my varnish config and I was wondering if there is an easier to way to match the hostname other than a series of if statements. Is there anything like a hash? Or does anybody have any C code to do this? 
example pseudo code: ================================= vcl_recv(){ if(req.http.host == "www.domain1.com") { set req.backend = www_domain1_com; # more code return(lookup); } if(req.http.host == "www.domain2.com") { set req.backend = www_domain2_com; # more code return(lookup); } if(req.http.host == "www.domain3.com") { set req.backend = www_domain3_com; # more code return(lookup); } } ================================= Thank you, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From junxian.yan at gmail.com Tue Mar 8 08:16:50 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 23:16:50 -0800 Subject: should beresp will be added into cache? Message-ID: Hi Guys I added some logic to change the beresp header in vcl_fetch. And I also do lookup for the same request in vcl_recv. The handling process I expected should be: the first incoming request will be changed by fetch logic and the second request should use the cache with the changed part But the actually result is the change parts are not be cached Here is my code: in vcl_fetch if (req.url ~ "/(images|javascripts|stylesheets)/") { unset beresp.http.Set-Cookie; set beresp.http.Cache-Control = "private, max-age = 3600, must-revalidate"; # 1 hour set beresp.ttl = 10m; set beresp.http.clientcache = "1"; return(deliver); } And I also wanna the response of the second request have the max-age = 3600 and clientcache = 1. The actual result is max-age = 0 and no clientcache in response Found some explanation in varnish doc lib, seems not as exactly as I expected. Is the beresp inserted into cache totally? deliverPossibly insert the object into the cache, then deliver it to the client. Control will eventually pass to vcl_deliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From indranilc at rediff-inc.com Tue Mar 8 08:32:53 2011 From: indranilc at rediff-inc.com (Indranil Chakravorty) Date: 8 Mar 2011 07:32:53 -0000 Subject: =?utf-8?B?UmU6IEEgbG90IG9mIGlmIHN0YXRlbWVudHMgdG8gaGFuZGxlIGhvc3RuYW1lcw==?= Message-ID: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Apart from improving the construct to if ... elseif , could you please tell me the reason why you are looking for a different way? Is it only for ease of writing less statements or is there some other reason you foresee? I am asking because we also have a number of similar construct in our vcl. Thanks. Thanks, Neel On Tue, 08 Mar 2011 12:31:11 +0530 Paul Lu <paul.lu81 at gmail.com> wrote >Hi, > >I have to work with a lot of domain names in my varnish config and I was wondering if there is an easier to way to match the hostname other than a series of if statements. Is there anything like a hash? Or does anybody have any C code to do this? 
> >example pseudo code: >================================= >vcl_recv(){ > > if(req.http.host == "www.domain1.com") > { > set req.backend = www_domain1_com; > # more code > return(lookup); > } > if(req.http.host == "www.domain2.com") > { > set req.backend = www_domain2_com; > # more code > return(lookup); > } > if(req.http.host == "www.domain3.com") > { > set req.backend = www_domain3_com; > # more code > return(lookup); > } >} >================================= > >Thank you, >Paul > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Tue Mar 8 08:39:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 07:39:08 +0000 Subject: Lots of configs In-Reply-To: Your message of "Mon, 07 Mar 2011 08:02:27 EST." <4D74D763.706@sbgnet.com> Message-ID: <39035.1299569948@critter.freebsd.dk> In message <4D74D763.706 at sbgnet.com>, David Helkowski writes: >The best way would be to use a jump table. >By that, I mean to make multiple subroutines in C, and then to jump to >the different subroutines by looking >up pointers to the subroutines using a string hashing/lookup system. The sheer insanity of this proposal had me wondering which vending machine gave you a CS degree instead of the cola you ordered. But upon reading: >I attempted to do this myself when I first started using >varnish, but I was having problems with varnish crashing >when attempting to use the code I wrote in C. There may be >limitations to the C code that can be used. I realized that you're probably just some troll trying to have a bit of a party here on our mailing list, or possibly some teenager in his mothers basement, from where you "rulez teh w0rld" because he is quite clearly Gods Gift To Computers. Or quite likely both. The fact that you have to turn to Wikipedia to find out how many instructions a contemporary CPU can execute per second, and then get the answer wrong by about an order of magnitude makes me almost sad for you. But you may have a future in you still, but there are a lot of good books you will have read to unlock it. I would recommend you start out with "The Mythical Man Month", and continue with pretty much anything Kernighan has written on the subject of programming. At some point, you will understand what Dijkstra is talking about here: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD01xx/EWD117.html Until then, you should not attempt to do anything with a computer that could harm other people. And now: Please shut up before I mock you. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Tue Mar 8 08:41:26 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 07:41:26 +0000 Subject: should beresp will be added into cache? In-Reply-To: Your message of "Mon, 07 Mar 2011 23:16:50 PST." 
Message-ID: <39065.1299570086@critter.freebsd.dk>

In message , Junxian Yan writes:

>Here is my code, in vcl_fetch:
>
>    if (req.url ~ "/(images|javascripts|stylesheets)/") {
>        unset beresp.http.Set-Cookie;
>        set beresp.http.Cache-Control = "private, max-age = 3600, must-revalidate";  # 1 hour
>        set beresp.ttl = 10m;
>        set beresp.http.clientcache = "1";
>        return(deliver);
>    }
>
>I also want the response to the second request to have max-age = 3600
>and clientcache = 1. The actual result is max-age = 0 and no clientcache
>in the response.

Wouldn't it be easier to set the Cache-Control in vcl_deliver then?

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From junxian.yan at gmail.com Tue Mar 8 09:23:06 2011
From: junxian.yan at gmail.com (Junxian Yan)
Date: Tue, 8 Mar 2011 00:23:06 -0800
Subject: should beresp be added into cache?
In-Reply-To: <39065.1299570086@critter.freebsd.dk>
References: <39065.1299570086@critter.freebsd.dk>
Message-ID:

Actually, I need to set clientcache in fetch. But it seems varnish will not
add this attribute to the cached object.

On Mon, Mar 7, 2011 at 11:41 PM, Poul-Henning Kamp wrote:
> [snip]
>
> Wouldn't it be easier to set the Cache-Control in vcl_deliver then?
>
> [snip]

From dhelkowski at sbgnet.com Tue Mar 8 14:03:47 2011
From: dhelkowski at sbgnet.com (David Helkowski)
Date: Tue, 08 Mar 2011 08:03:47 -0500
Subject: Lots of configs
In-Reply-To: <39035.1299569948@critter.freebsd.dk>
References: <39035.1299569948@critter.freebsd.dk>
Message-ID: <4D762933.5080905@sbgnet.com>

To write this sort of message, and to the list no less, is nothing short of
immature. Insofar as what I said caused such a response, I apologize to those
who bothered to read this. That said, I am going to respond to the points
made. I would appreciate it if a third party (well, a fourth at this point)
with more experience and maturity would chip in and provide some order to this
discussion.

On 3/8/2011 2:39 AM, Poul-Henning Kamp wrote:
> In message<4D74D763.706 at sbgnet.com>, David Helkowski writes:
>
>> The best way would be to use a jump table.
>> By that, I mean to make multiple subroutines in C, and then to jump to
>> the different subroutines by looking
>> up pointers to the subroutines using a string hashing/lookup system.
> The sheer insanity of this proposal had me wondering which vending
> machine gave you a CS degree instead of the cola you ordered.

They don't teach jump tables in any college I know of.
I believe I first learned about them in my own reading of 'Peter Norton's
Assembly Language', a book I first read perhaps 15 years ago. I still have the
book on the shelf. I don't think Peter Norton would ever call an ingenious
solution to a challenging problem 'sheer insanity'. He would very likely laugh
at the simplicity of what I am suggesting.

> But upon reading:
>
>> I attempted to do this myself when I first started using
>> varnish, but I was having problems with varnish crashing
>> when attempting to use the code I wrote in C. There may be
>> limitations to the C code that can be used.
> I realized that you're probably just some troll trying to have
> a bit of a party here on our mailing list, or possibly some teenager
> in his mothers basement, from where you "rulez teh w0rld" because
> he is quite clearly Gods Gift To Computers.

This is called an ad hominem attack. Belittling those you interact
with in no way strengthens your opinion. I am also not sure why this is a
response to what you quoted me on. I wrote what I did because I am
actually curious whether someone has the time and effort to get hash tables
working in VCL. I would like to see a working rendition of it. I didn't
really spend much time attempting to make it work, because my own
usage of VCL didn't end up requiring it. That is, my statement here is
an admission of my own lack of knowledge of the limitations of inline C
in VCL. I am not trolling and would seriously like to see working hash tables.

> Or quite likely both.
>
> The fact that you have to turn to Wikipedia to find out how many
> instructions a contemporary CPU can execute per second, and then
> get the answer wrong by about an order of magnitude makes me almost
> sad for you.

I will test your code and write a subroutine demonstrating the reality
of the numbers I have quoted. Once I have done that I will respond to this
statement.

> But you may have a future in you still, but there are a lot of good
> books you will have to read to unlock it.
>
> I would recommend you start out with "The Mythical Man Month", and
> continue with pretty much anything Kernighan has written on the
> subject of programming.

I have read many discussions of the book in question, and am quite
familiar with the writing of Kernighan and Ritchie. They write well about the
C language. Their methodologies are also outdated. Their book on C is over
20 years old at this point. Obviously good information doesn't expire, but a
lot of good things have been learned since then. I am not interested in
playing knowledge-based games. Programming is not a trivia game; it is about
applying workable solutions to real-world problems in an efficient manner.

> At some point, you will understand what Dijkstra is talking about here:
>
> http://www.cs.utexas.edu/users/EWD/transcriptions/EWD01xx/EWD117.html

No doubt this is a well-written piece that deserves a response of its own.
I am not going to respond to this link in any detail at the moment, because
you haven't bothered to explain the purpose of putting it here, other than to
link to something better written than your own childish attack.

> Until then, you should not attempt to do anything with a computer
> that could harm other people.

I hardly see how answering a request for the right way to do something
with a correct approach will cause harm. It is up to the reader to decide
which method they wish to use. Also, I am concerned with your lack of
confidence in other users of Varnish.
I think that there are many learned users of it, and a good number of them are quite capable of taking my hash table suggestion and making it a usable reality. Once it is a reality it could easily be used by other less experienced users of Varnish. How is having an open discussion about an efficient solution to a recurring problem harmful? > And now: Please shut up before I mock you. If you wish to mock; feel free. I would prefer if you send me a direct email and do not send such nonsense to the list, nor to other uninvolved parties. > Poul-Henning > From dhelkowski at sbgnet.com Tue Mar 8 14:20:01 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 08:20:01 -0500 Subject: Lots of configs In-Reply-To: <4D752966.7000203@sbgnet.com> References: <4D752966.7000203@sbgnet.com> Message-ID: <4D762D01.3030209@sbgnet.com> First off, I would like to thank Per Buer for pointing out that I am off by a factor of 1000 in the following statements. I have corrected for that below so that my statements are more clear. My mistake was in considering modern processors as 2 megahertz instead of 2 gigahertz. On 3/7/2011 1:52 PM, David Helkowski wrote: > A modern CPU can run, at most, around 10 million -assembly based- > instructions per second. Make that 10 billion. The math I am using is 5 x 2 gigahertz. > See http://en.wikipedia.org/wiki/Instructions_per_second > A regular expression compare is likely at least 20 or so assembly > instructions. > That gives around 500,000 regular expression compares if you are using > 100% of the > CPU just for that. A reasonable amount of CPU to consume would be 30% > ( at most ). > So; you are left with around 150k regular expression checks per second. The correct numbers here are 500 million. A regular expression compare more likely takes 40 assembly instructions, so I am going to cut this to 250 million. LIkewise, at 30%, that leads to about 80 million. > > Lets suppose there are 500 different domains. On average, you will be > doing 250 if/else > checks per call. 150k / 250 = 600. The new number is 80 million / 250 = 320k > That means that you will get, under fair conditions, a max > of about 600 hits per second. 320,000 hits per second. Obviously, no server is capable of serving up such a number. Just using regular expressions in a cascading if/then will work fine in this case. My apologies for the confusion in this regard. What I can see is a server serving around 10,000 hits per second. That would require about 30x the number of domains. You don't really want to eat up CPU usage for just if/then though, so probably at around 10x the number of domains you'd want to switch to a hash table. So; correcting my conclusion; if you are altering configuration for 5000 domains, then you are going to need a hash table. Otherwise you are going to be fine just using a cascading if/then, despite it being ugly. > The person asking the question likely has 500 domains running. > That gives a little over 1 hit possible per second per domain. Do you > think that is an acceptable > solution for this person? I think not. > > Compare it to a hash lookup. A hash lookup, using a good minimal > perfect hashing algorithms, > will take at most around 10 operations. Using the same math as above, > that gives around 300k > lookups per second. A hash would be roughly 500 times faster than > using if/else... Note that despite my being off by a factor of 1000, the multiplication still holds out. 
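To make the structure being argued about concrete, here is a standalone C sketch (plain C, not Varnish inline C, with made-up hostnames and backend names) of the kind of constant-time host-to-backend lookup David describes: a small open-addressed hash table keyed on the Host value. It illustrates the data structure under discussion, not anything shipped with or endorsed for Varnish:

    /* Illustrative only: map a Host header value to a backend name in O(1). */
    #include <stdio.h>
    #include <string.h>

    #define NBUCKET 64                     /* power of two, larger than the host count */

    struct entry { const char *host; const char *backend; };
    static struct entry table[NBUCKET];    /* zero-initialized: empty slots have host == NULL */

    static unsigned hash(const char *s) {  /* djb2-style string hash */
        unsigned h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    static void insert(const char *host, const char *backend) {
        unsigned i = hash(host) & (NBUCKET - 1);
        while (table[i].host != NULL)      /* open addressing; table must not be full */
            i = (i + 1) & (NBUCKET - 1);
        table[i].host = host;
        table[i].backend = backend;
    }

    static const char *lookup(const char *host) {
        unsigned i = hash(host) & (NBUCKET - 1);
        while (table[i].host != NULL) {
            if (strcmp(table[i].host, host) == 0)
                return table[i].backend;
            i = (i + 1) & (NBUCKET - 1);
        }
        return NULL;                       /* unknown host */
    }

    int main(void) {
        insert("www.domain1.com", "www_domain1_com");
        insert("www.domain2.com", "www_domain2_com");
        insert("www.domain3.com", "www_domain3_com");
        printf("%s\n", lookup("www.domain2.com"));  /* prints www_domain2_com */
        return 0;
    }

In David's proposal the value stored would be a function pointer rather than a name (the "jump table" part), but the cost argument is the same: a handful of operations per lookup instead of hundreds of string or regex compares. Whether that is worth wiring into VCL is the real disagreement in this thread, since, as Per's figures suggest, at a few hundred hostnames neither approach is anywhere near the bottleneck.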
If you use a hash table, even with only 500 domains, a hash table will -still- be 500 times faster. I still think it would be great to have a hash table solution available for use in VCL. > > On 3/7/2011 1:35 PM, Per Buer wrote: >> Hi, >> >> On Sun, Mar 6, 2011 at 11:39 PM, AD > > wrote: >> >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end >> up being 100-500 includes to map to each hostname and then a long >> if/then block based on the hostname. Is there a more scalable >> way to deal with this? >> >> >> CPU and memory bandwidth is abundant on modern servers. I'm actually >> not sure that having a 500 entries long if/else statement will hamper >> performance at all. Remember, there will be no system calls. I would >> guess a modern server will execute at least a four million >> regex-based if/else per second per CPU core if most of the code and >> data will be in the on die cache. So executing 500 matches should >> take about 0.5ms. >> >> It might not make sense to optimize this. >> >> -- >> Per Buer, Varnish Software >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer >> Varnish makes websites fly! >> Want to learn more about Varnish? >> http://www.varnish-software.com/whitepapers >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Tue Mar 8 15:01:03 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 08 Mar 2011 15:01:03 +0100 Subject: Lots of configs In-Reply-To: <4D762933.5080905@sbgnet.com> (David Helkowski's message of "Tue, 08 Mar 2011 08:03:47 -0500") References: <39035.1299569948@critter.freebsd.dk> <4D762933.5080905@sbgnet.com> Message-ID: <87d3m11zkw.fsf@qurzaw.varnish-software.com> ]] David Helkowski (if you could add a blank line between quoted text and your own addition that makes it much easier to read your replies) Hi, | This is called an Ad hominen attack. Belittling those you interact | with in no way betters your opinion. I am also not sure why this is a | response to what you quoted me on. I wrote what I did because I am | actually curious if someone has time and effort to get hash tables | working in VCL. I would to see a working rendition of it. I didn't | really spend much time attempting to make it work, because my own | usage of VCL didn't end up requiring it. We'll probably end up implementing hash tables in a vmod at some point, but it's not anywhere near the top of the todo list. What we've been discussing so far would probably not have been useful for your use case above, though. As for doing 3-500 regex or string matches per request that's hardly a big problem for us as Per's numbers demonstrate. cheers, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From phk at phk.freebsd.dk Tue Mar 8 15:53:20 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 14:53:20 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 18:27:29 +0400." 
Message-ID: <88534.1299596000@critter.freebsd.dk> In message , Jona than DeMello writes: >Poul simply comes across as a nervous child, throwing every superiority >imposing cliche out there because he thought a team member was >'threatened'. I received a couple of complaints about flames (on and off list) originating from David, and after reading his contribution, decided that he was not worth the bother, and decided to call his bullshit and get it over with. "Jump Tables" was a very neat concept, about 25-30 years ago, when people tried to squeeze every bit of performance out of a 4.77MHz i8088 chip in a IBM PC. They are however just GOTO in disguise and they have all the disadvantages of GOTO, without, and this is important: without _any_ benefits at all on a modern pipelined and deeply cache starved CPU. That's why I pointed David at Dijkstra epistle and other literature for building moral character as a programmer. If David had come up with a valid point or a good suggestion, then I would possibly tolerate a minimum of behavioural problems from him. But suggesting we abandon 50 years of progress towards structured programming, and use GOTOs to solve a nonexistant problem, for which there are perfectly good and sensible methods, should it materialize, just because he saw a neat trick in an old book and wanted to show of his skillz, earns him no right to flame people in this project. And that's the end of that. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From chris.shenton at nasa.gov Tue Mar 8 18:11:39 2011 From: chris.shenton at nasa.gov (Shenton, Chris (HQ-LM020)[INDYNE INC]) Date: Tue, 8 Mar 2011 11:11:39 -0600 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? In-Reply-To: References: Message-ID: On Mar 7, 2011, at 4:48 PM, Jim Hayter wrote: > In my environment, port numbers may be on the request, but are not needed to respond nor should they influence the cache. In my vcl_recv, I have the following lines: > > /* determine vhost name w/out port number */ > set req.http.newhost = regsub(req.http.host, "([^:]*)(:.*)?$", "\1"); > set req.http.host = req.http.newhost; > > This strips off the port number from the host name in the request. Doing it this way, the port number is discarded and NOT passed on to the application. It is also not present when creating and looking up hash entries. This looks like it does exactly what we need. I thought I was going have to monkey with server.port, or what the vcl_hash includes in its key calculation, but this is straight-forward. Thanks a lot, Jim. From dhelkowski at sbgnet.com Tue Mar 8 20:50:57 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 14:50:57 -0500 Subject: Lots of configs In-Reply-To: <88534.1299596000@critter.freebsd.dk> References: <88534.1299596000@critter.freebsd.dk> Message-ID: <4D7688A1.8020202@sbgnet.com> On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: > In message, Jona > than DeMello writes: > >> Poul simply comes across as a nervous child, throwing every superiority >> imposing cliche out there because he thought a team member was >> 'threatened'. > I received a couple of complaints about flames (on and off list) > originating from David, and after reading his contribution, decided > that he was not worth the bother, and decided to call his bullshit > and get it over with. 
I will admit to writing one email angrily responding to Per Buer. My anger was due primarily to the statement "if you got to deploy a whole bunch of scary inline C that will seriously intimidate the summer intern and makes all the other fear the config it's just not worth it." The contents of that private email essentially boiled to me saying, in many more words: "not everyone is as stupid as you". Now, I agree that was distasteful, but it isn't much different than you stating you are 'calling my bullshit'. I am not quite sure why you, and others, have decided that this is a pissing match. Also; if it helps anything; I apologize for my ranting email to Per Buer. It was certainly over the line. I am sorry for going off on that. I have my reasons but I would still like to have a meaningful discussion. Per Buer, the very person I ticked off, admitted that a hash lookup is certainly faster. Other people are expressing interested in having a hash system in place with VCL. I myself am even willing to write the system. Sure I may be obnoxious at times in my presentation of what I want done, but I hardly thing it calls for your response or arrogant counter-attitude. > "Jump Tables" was a very neat concept, about 25-30 years ago, when > people tried to squeeze every bit of performance out of a 4.77MHz > i8088 chip in a IBM PC. Jump tables, and gotos, are still perfectly usable on modern system. Good techniques, in their proper place, don't expire. Hash tables for instance certainly have not been replaced by cascading 'if else' structures. Note that I am suggesting hash tables combined with jump tables. I don't see any legitimate objection to such an idea. > They are however just GOTO in disguise and they have all the > disadvantages of GOTO, without, and this is important: without _any_ > benefits at all on a modern pipelined and deeply cache starved CPU. So we should continue using cascading 'if else'? That is _very_ efficient on modern CPU architecture? ... > That's why I pointed David at Dijkstra epistle and other literature > for building moral character as a programmer. Yeah... speaking of that; I read the beginning of the article at the very least. It immediately starts talking about code elegance and the purity of solutions. If anything, it leans very heavily towards hash tables as opposed to long cascading 'if else'. > If David had come up with a valid point or a good suggestion, then > I would possibly tolerate a minimum of behavioural problems from him. How is 'can we please use hash tables' not a valid point and suggestion? > But suggesting we abandon 50 years of progress towards structured > programming, and use GOTOs to solve a nonexistant problem, for which > there are perfectly good and sensible methods, should it materialize, > just because he saw a neat trick in an old book and wanted to show > of his skillz, earns him no right to flame people in this project. Perfectly good and sensible methods such as what? 500 cascading 'if else' for each call? Are you seriously suggesting that is a technique honed to perfection in the last 50 years that is based on structured programming? I read about jump tables and hashing many many years ago. It is hardly a neat trick I recently dug out of an old book. Let me ask you this: have you heard of Bob Jenkins? Would you say his analysis of hash tables is outdated and meaningless? In regard to showing off skills; I could really care less what you or anyone else think of my coding skills. 
I responded to the initial question because I wanted to honestly point people towards a better solution to a recurring problem that has been mentioned in the list. Your last statement implies people can 'earn' the right to flame. ? Is that what you are doing? Using your 'earned' right to flame me? > And that's the end of that. Having the last word is something given to the victor. Arbitrarily declaring your statements to be the last word is pretty arrogant. > Poul-Henning > From drew.smathers at gmail.com Tue Mar 8 21:34:48 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Tue, 8 Mar 2011 15:34:48 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 5:52 PM, Drew Smathers wrote: > On Mon, Mar 7, 2011 at 5:39 PM, Per Buer wrote: [snip] >> >> That's "Saint Mode" - please >> see?http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode >> I see that there isn't too much details on the semantics there. I'll see if >> I can add some details. > > Hi Per, > > I actually tried using saintmode for this problem but one point that I > found tricky is that saintmode (as far as i can tell from docs) can > only be set on beresp. If the backend is up, that's great because I > can check a non-200 status in vcl_fetch() and set. But in the case of > all backends being down, vcl_fetch() doesn't even get invoked and > there isn't any other routine and object in the routine's execution > context (that I know of) where I can set saintmode and restart. > Sorry to bump my own thread, but does anyone know of a way to set saintmode if a backend is down, vs. up and misbehaving (returning 500, etc)? Also, I added a backend probe and this indeed caused grace to kick in once the probe determined the backend as sick.I think the docs should be clarified if this isn't a bug (grace not working without probe): http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers Finally it's somewhat disconcerting that in the interim between a cache expiry and before varnish determines a backend as down (sick) it will 503 - so this could affect many clients during that window. Ideally, I'd like to successfully service requests if there's an object in the cache - period - but I guess this isn't possible now with varnish? Thanks, -Drew From ronan at iol.ie Tue Mar 8 21:38:08 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 8 Mar 2011 20:38:08 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs Message-ID: I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. The current load is light (up to a couple of thousand active sessions, peak is around five thousand). Varnish has a fairly simple config with a director consisting of two Apache backends: backend backend1 { .host = "1.2.3.4"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 1.2.3.4" "Connection: close"; } } backend backend2 { .host = "5.6.7.8"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 5.6.7.8" "Connection: close"; } } The numbers are modest, but significant - about 1 POST in a hundred fails. 
I've upped the backend timeouts to 90 seconds (first_byte / between_bytes) and I'm pretty confident they're responding in well under that time. varnishlog does not show any backend health changes. A typical event looks like: Varnish: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST http://www.sitename.net/newreply.php?do=postreply&t=285227 HTTP/1.1" 503 2623 Backend: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST /newreply.php?do=postreply&t=285227 HTTP/1.1" 200 2686 The POST appears to work fine on the backend but the user gets a 503 from Varnish. It's not unusual to see users getting the error several times in a row (presumably re-submitting the post): a.b.c.d - - [08/Mar/2011:18:21:23 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:36 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:50 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 A typical request is below. The first attempt fails with: 33 FetchError c http first read error: -1 0 (Success) there is presumably a restart and the second attempt (sometimes to backend1, sometimes backend2) fails with: 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This pattern has been the same on the few transactions I've examined in detail. The full log output of a typical request is below. I'm stumped. Has anybody got any ideas what might be causing this? -Ronan 33 RxRequest c POST 33 RxURL c /ajax.php 33 RxProtocol c HTTP/1.1 33 RxHeader c Accept: */* 33 RxHeader c Accept-Language: nl-be 33 RxHeader c Referer: http://www.redcafe.net/ 33 RxHeader c x-requested-with: XMLHttpRequest 33 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 33 RxHeader c Accept-Encoding: gzip, deflate 33 RxHeader c User-Agent: Mozilla/4.0 (compatible; ...) 33 RxHeader c Host: www.sitename.net 33 RxHeader c Content-Length: 82 33 RxHeader c Connection: Keep-Alive 33 RxHeader c Cache-Control: no-cache 33 RxHeader c Cookie: ... 33 VCL_call c recv 33 VCL_return c pass 33 VCL_call c hash 33 VCL_return c hash 33 VCL_call c pass 33 VCL_return c pass 33 Backend c 44 backend backend1 44 TxRequest b POST 44 TxURL b /ajax.php 44 TxProtocol b HTTP/1.1 44 TxHeader b Accept: */* 44 TxHeader b Accept-Language: nl-be 44 TxHeader b Referer: http://www.sitename.net/ 44 TxHeader b x-requested-with: XMLHttpRequest 44 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 44 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 44 TxHeader b Host: www.sitename.net 44 TxHeader b Content-Length: 82 44 TxHeader b Cache-Control: no-cache 44 TxHeader b Cookie: ... 44 TxHeader b Accept-Encoding: gzip 44 TxHeader b X-Forwarded-For: a.b.c.d 44 TxHeader b X-Varnish: 657185708 * 33 FetchError c http first read error: -1 0 (Success) 44 BackendClose b backend1 33 Backend c 47 backend backend2 47 TxRequest b POST 47 TxURL b /ajax.php 47 TxProtocol b HTTP/1.1 47 TxHeader b Accept: */* 47 TxHeader b Accept-Language: nl-be 47 TxHeader b Referer: http://www.sitename.net/ 47 TxHeader b x-requested-with: XMLHttpRequest 47 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 47 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 47 TxHeader b Host: www.sitename.net 47 TxHeader b Content-Length: 82 47 TxHeader b Cache-Control: no-cache 47 TxHeader b Cookie: ... 
47 TxHeader b Accept-Encoding: gzip 47 TxHeader b X-Forwarded-For: a.b.c.d 47 TxHeader b X-Varnish: 657185708 * 33 FetchError c backend write error: 11 (Resource temporarily unavailable) 47 BackendClose b backend2 33 VCL_call c error 33 VCL_return c deliver 33 VCL_call c deliver 33 VCL_return c deliver 33 TxProtocol c HTTP/1.1 33 TxStatus c 503 33 TxResponse c Service Unavailable 33 TxHeader c Server: Varnish 33 TxHeader c Retry-After: 0 33 TxHeader c Content-Type: text/html; charset=utf-8 33 TxHeader c Content-Length: 2623 33 TxHeader c Date: Tue, 08 Mar 2011 17:08:33 GMT 33 TxHeader c X-Varnish: 657185708 33 TxHeader c Age: 3 33 TxHeader c Via: 1.1 varnish 33 TxHeader c Connection: close 33 Length c 2623 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 33 SessionClose c error 33 StatSess c a.b.c.d 50044 3 1 1 0 1 0 235 2623 From perbu at varnish-software.com Tue Mar 8 21:51:55 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 8 Mar 2011 21:51:55 +0100 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: Hi Drew, list. On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers wrote: > Sorry to bump my own thread, but does anyone know of a way to set > saintmode if a backend is down, vs. up and misbehaving (returning 500, > etc)? > > Also, I added a backend probe and this indeed caused grace to kick in > once the probe determined the backend as sick.I think the docs should > be clarified if this isn't a bug (grace not working without probe): > > http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers Check out the trunk version of the docs. Committed some earlier today. > Finally it's somewhat disconcerting that in the interim between a > cache expiry and before varnish determines a backend as down (sick) it > will 503 - so this could affect many clients during that window. > Ideally, I'd like to successfully service requests if there's an > object in the cache - period - but I guess this isn't possible now > with varnish? > Actually it is. In the docs there is a somewhat dirty trick where set a marker in vcl_error, restart and pick up on the error and switch backend to one that is permanetly down. Grace kicks in and serves the stale content. Sometime post 3.0 there will be a refactoring of the whole vcl_error handling and we'll end up with something a bit more elegant. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Tue Mar 8 22:54:57 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Tue, 8 Mar 2011 16:54:57 -0500 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> I would look at setting a fail director. Restart if there is a 503, and if restarts > 0 select the patient director with very generous health checking. Your timeouts are reasonable, but try .timeout 20s and .threshold 1 for the patient director. Having a different view of the backends usually deals with occasional 503s. 
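Per's "dirty trick" and Stefan's fail-director idea share the same shape: on a backend error, restart the request and point it at a backend view that is (or is treated as) sick, so a graced copy is served instead of a 503. A rough sketch of that shape in 2.1-era VCL, assuming a version in which beresp.saintmode exists and a bare restart is allowed in vcl_fetch and vcl_error (newer VCL spells it return (restart)); the backend name failhole, the probe details and the 4h grace window are invented for the example, not taken from the thread or the docs:

    backend failhole {
        # points at a closed port; its probe can never succeed, so this
        # backend stays sick and grace takes over whenever it is selected
        .host = "127.0.0.1";
        .port = "9999";
        .probe = {
            .url = "/";
            .interval = 5s;
            .timeout = 1s;
            .window = 1;
            .threshold = 1;
        }
    }

    sub vcl_recv {
        set req.grace = 4h;
        if (req.restarts > 0) {
            # second pass after an error: aim at the always-sick backend so a
            # stale object within its grace window is delivered if one exists
            set req.backend = failhole;
        }
    }

    sub vcl_fetch {
        set beresp.grace = 4h;
        if (beresp.status >= 500) {
            set beresp.saintmode = 20s;   # blacklist this object on this backend
            restart;
        }
    }

    sub vcl_error {
        if (obj.status == 503 && req.restarts == 0) {
            # the backend could not be reached at all; retry via the graced path
            restart;
        }
    }

The window Drew describes, between an object expiring and the probe marking the backend sick, still exists, but with the restart path a stale copy inside its grace window should be preferred over handing the client a 503.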
Stefan Caunter
Operations
Torstar Digital
m: (416) 561-4871

-----Original Message-----
From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally
Sent: March-08-11 3:38 PM
To: varnish-misc at varnish-cache.org
Subject: Varnish 503ing on ~1/100 POSTs

[snip]

From kbrownfield at google.com Tue Mar 8 22:56:51 2011
From: kbrownfield at google.com (Ken Brownfield)
Date: Tue, 8 Mar 2011 13:56:51 -0800
Subject: Lots of configs
In-Reply-To: <4D7688A1.8020202@sbgnet.com>
References: <88534.1299596000@critter.freebsd.dk> <4D7688A1.8020202@sbgnet.com>
Message-ID:

An O(1) solution (e.g., a hash table) is a perfectly valid optimization of an
O(N) solution. But you are confusing an O(N) solution with an O(N) problem.

If the O(N) solution in actual bona fide reality becomes a problem for
someone's use-case, I'm sure that an O(1) solution can be implemented as
necessary. If enough someones need this O(1) solution, then it will begin
to show up on this project's official radar as a potential built-in VCL
feature or vmod. It's that simple.

If anyone else here wants to continue pettifogging with you, please let
them elect to email you directly, rather than sharing this debate with
those of us who don't. It will substantiate the character and experience
that you profess to have.
Cheers,
-- kb

On Tue, Mar 8, 2011 at 11:50, David Helkowski wrote:
> [snip]

From B.Dodd at comicrelief.com Tue Mar 8 23:06:41 2011
From: B.Dodd at comicrelief.com (Ben Dodd)
Date: Tue, 8 Mar 2011 22:06:41 +0000
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To:
References:
Message-ID:

Hello,

This is only to add we've been experiencing exactly the same issue and are
desperately searching for a solution. Can anyone help?

Thanks,
Ben

On 8 Mar 2011, at 21:55, <varnish-misc-request at varnish-cache.org> wrote:

-----Original Message-----
From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally
Sent: March-08-11 3:38 PM
To: varnish-misc at varnish-cache.org
Subject: Varnish 503ing on ~1/100 POSTs

[snip]

From stewsnooze at gmail.com Tue Mar 8 23:11:39 2011
From: stewsnooze at gmail.com (Stewart Robinson)
Date: Tue, 8 Mar 2011 16:11:39 -0600
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To:
References:
Message-ID: <8371073562863633333@unknownmsgid>

On 8 Mar 2011, at 16:07, Ben Dodd wrote:

Hello,

This is only to add we've been experiencing exactly the same issue and are
desperately searching for a solution. Can anyone help?

Thanks,
Ben

On 8 Mar 2011, at 21:55, <varnish-misc-request at varnish-cache.org> wrote:

[snip]

Whilst this may not be a fix to a possible bug in varnish have you tried
switching posts to pipe instead of pass?

Stew

From ronan at iol.ie Tue Mar 8 23:32:29 2011
From: ronan at iol.ie (Ronan Mullally)
Date: Tue, 8 Mar 2011 22:32:29 +0000 (GMT)
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To: <8371073562863633333@unknownmsgid>
References: <8371073562863633333@unknownmsgid>
Message-ID:

On Tue, 8 Mar 2011, Stewart Robinson wrote:

> Whilst this may not be a fix to a possible bug in varnish have you tried
> switching posts to pipe instead of pass?
This might well help, but I'd have no way of knowing for sure. The backend servers indicate the requests via varnish are processed correctly. I'm not able to reproduce the problem at will so I'd be relying on user feedback to determine if the problem still occurs and that's unreliable at best. It is of course better than having the problem occur, but I'd rather take the opportunity to try and get to the bottom of it while I can. I only deployed varnish a couple of days ago. The site will be fairly quiet until the end of the week. I'll resort to pipe if I've not got a fix by then. -Ronan From phk at phk.freebsd.dk Tue Mar 8 23:36:18 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 22:36:18 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 14:50:57 EST." <4D7688A1.8020202@sbgnet.com> Message-ID: <7141.1299623778@critter.freebsd.dk> In message <4D7688A1.8020202 at sbgnet.com>, David Helkowski writes: >On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: >> In message, Jona >My anger was due primarily to the statement "if you got to deploy a >whole bunch of scary inline C that will seriously intimidate the >summer intern and makes all the other fear the config it's just not >worth it." > >The contents of that private email essentially boiled to me saying, in >many more words: >"not everyone is as stupid as you". Well, to put it plainly and simply: In the context of the Varnish project, viewed through the prism that is our project philosphy, Per is Right and you are Wrong. Inline C is the last resort, it is there because there needs to be a last resort, but optimizations like the one you propose does not belong *anywhere* inline C or not. End of story. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ronan at iol.ie Tue Mar 8 23:42:04 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 8 Mar 2011 22:42:04 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> Message-ID: Hi Stefan, On Tue, 8 Mar 2011, Caunter, Stefan wrote: > I would look at setting a fail director. Restart if there is a 503, and > if restarts > 0 select the patient director with very generous health > checking. Your timeouts are reasonable, but try .timeout 20s and > .threshold 1 for the patient director. Having a different view of the > backends usually deals with occasional 503s. Thanks for your email. Unfortunately as a varnish newbie most of it went right over my head. Are you suggesting I make changes to the health check probes to see if they will up/down backends more aggressively? I would be surprised if there are underlying health issues with the back end. The site has been running fine under everything but the heaviest of loads using pound as the front end for the past couple of years, and the backend log entries I've looked at suggest that apache is processing the POSTs fine, it's varnish that's returning the error. 
-Ronan > -----Original Message----- > >From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan > Mullally > Sent: March-08-11 3:38 PM > To: varnish-misc at varnish-cache.org > Subject: Varnish 503ing on ~1/100 POSTs > > I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin > website. > The current load is light (up to a couple of thousand active sessions, > peak is around five thousand). Varnish has a fairly simple config with > a director consisting of two Apache backends: > > backend backend1 { > .host = "1.2.3.4"; > .port = "80"; > .connect_timeout = 5s; > .first_byte_timeout = 90s; > .between_bytes_timeout = 90s; > .probe = { > .timeout = 5s; > .interval = 5s; > .window = 5; > .threshold = 3; > .request = > "HEAD /favicon.ico HTTP/1.0" > "X-Forwarded-For: 1.2.3.4" > "Connection: close"; > } > } > > backend backend2 { > .host = "5.6.7.8"; > .port = "80"; > .connect_timeout = 5s; > .first_byte_timeout = 90s; > .between_bytes_timeout = 90s; > .probe = { > .timeout = 5s; > .interval = 5s; > .window = 5; > .threshold = 3; > .request = > "HEAD /favicon.ico HTTP/1.0" > "X-Forwarded-For: 5.6.7.8" > "Connection: close"; > } > } > > The numbers are modest, but significant - about 1 POST in a hundred > fails. > I've upped the backend timeouts to 90 seconds (first_byte / > between_bytes) > and I'm pretty confident they're responding in well under that time. > > varnishlog does not show any backend health changes. A typical event > looks like: > > Varnish: > a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST > http://www.sitename.net/newreply.php?do=postreply&t=285227 HTTP/1.1" 503 > 2623 > > Backend: > a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST > /newreply.php?do=postreply&t=285227 HTTP/1.1" 200 2686 > > The POST appears to work fine on the backend but the user gets a 503 > from > Varnish. It's not unusual to see users getting the error several times > in > a row (presumably re-submitting the post): > > a.b.c.d - - [08/Mar/2011:18:21:23 +0000] "POST > http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" > 503 2623 > a.b.c.d - - [08/Mar/2011:18:21:36 +0000] "POST > http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" > 503 2623 > a.b.c.d - - [08/Mar/2011:18:21:50 +0000] "POST > http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" > 503 2623 > > A typical request is below. The first attempt fails with: > > 33 FetchError c http first read error: -1 0 (Success) > > there is presumably a restart and the second attempt (sometimes to > backend1, sometimes backend2) fails with: > > 33 FetchError c backend write error: 11 (Resource temporarily > unavailable) > > This pattern has been the same on the few transactions I've examined in > detail. The full log output of a typical request is below. > > I'm stumped. Has anybody got any ideas what might be causing this? > > > -Ronan > > > 33 RxRequest c POST > 33 RxURL c /ajax.php > 33 RxProtocol c HTTP/1.1 > 33 RxHeader c Accept: */* > 33 RxHeader c Accept-Language: nl-be > 33 RxHeader c Referer: http://www.redcafe.net/ > 33 RxHeader c x-requested-with: XMLHttpRequest > 33 RxHeader c Content-Type: application/x-www-form-urlencoded; > charset=UTF-8 > 33 RxHeader c Accept-Encoding: gzip, deflate > 33 RxHeader c User-Agent: Mozilla/4.0 (compatible; ...) > 33 RxHeader c Host: www.sitename.net > 33 RxHeader c Content-Length: 82 > 33 RxHeader c Connection: Keep-Alive > 33 RxHeader c Cache-Control: no-cache > 33 RxHeader c Cookie: ... 
> 33 VCL_call c recv > 33 VCL_return c pass > 33 VCL_call c hash > 33 VCL_return c hash > 33 VCL_call c pass > 33 VCL_return c pass > 33 Backend c 44 backend backend1 > 44 TxRequest b POST > 44 TxURL b /ajax.php > 44 TxProtocol b HTTP/1.1 > 44 TxHeader b Accept: */* > 44 TxHeader b Accept-Language: nl-be > 44 TxHeader b Referer: http://www.sitename.net/ > 44 TxHeader b x-requested-with: XMLHttpRequest > 44 TxHeader b Content-Type: application/x-www-form-urlencoded; > charset=UTF-8 > 44 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) > 44 TxHeader b Host: www.sitename.net > 44 TxHeader b Content-Length: 82 > 44 TxHeader b Cache-Control: no-cache > 44 TxHeader b Cookie: ... > 44 TxHeader b Accept-Encoding: gzip > 44 TxHeader b X-Forwarded-For: a.b.c.d > 44 TxHeader b X-Varnish: 657185708 > * 33 FetchError c http first read error: -1 0 (Success) > 44 BackendClose b backend1 > 33 Backend c 47 backend backend2 > 47 TxRequest b POST > 47 TxURL b /ajax.php > 47 TxProtocol b HTTP/1.1 > 47 TxHeader b Accept: */* > 47 TxHeader b Accept-Language: nl-be > 47 TxHeader b Referer: http://www.sitename.net/ > 47 TxHeader b x-requested-with: XMLHttpRequest > 47 TxHeader b Content-Type: application/x-www-form-urlencoded; > charset=UTF-8 > 47 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) > 47 TxHeader b Host: www.sitename.net > 47 TxHeader b Content-Length: 82 > 47 TxHeader b Cache-Control: no-cache > 47 TxHeader b Cookie: ... > 47 TxHeader b Accept-Encoding: gzip > 47 TxHeader b X-Forwarded-For: a.b.c.d > 47 TxHeader b X-Varnish: 657185708 > * 33 FetchError c backend write error: 11 (Resource temporarily > unavailable) > 47 BackendClose b backend2 > 33 VCL_call c error > 33 VCL_return c deliver > 33 VCL_call c deliver > 33 VCL_return c deliver > 33 TxProtocol c HTTP/1.1 > 33 TxStatus c 503 > 33 TxResponse c Service Unavailable > 33 TxHeader c Server: Varnish > 33 TxHeader c Retry-After: 0 > 33 TxHeader c Content-Type: text/html; charset=utf-8 > 33 TxHeader c Content-Length: 2623 > 33 TxHeader c Date: Tue, 08 Mar 2011 17:08:33 GMT > 33 TxHeader c X-Varnish: 657185708 > 33 TxHeader c Age: 3 > 33 TxHeader c Via: 1.1 varnish > 33 TxHeader c Connection: close > 33 Length c 2623 > 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 > 0.000037670 2.887368441 0.000037193 > 33 SessionClose c error > 33 StatSess c a.b.c.d 50044 3 1 1 0 1 0 235 2623 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From dhelkowski at sbgnet.com Wed Mar 9 00:09:59 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 8 Mar 2011 18:09:59 -0500 (EST) Subject: Lots of configs In-Reply-To: <7141.1299623778@critter.freebsd.dk> Message-ID: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> Please refrain from continuing to message the list on this topic. I will not do so either, provided you stop sending things like 'David is wrong, and his ideas should never be considered' to the list. It is entirely childish, and I am sure people are sick of seeing this sort of garbage in the list. My only response to this latest attack is that Varnish is open source software. I can and will publish a how-to on using hashing in the manner that I have described. There is nothing that you can do to stop it, and I am sure people will take advantage of it. 
----- Original Message ----- From: "Poul-Henning Kamp" To: "David Helkowski" Cc: "Jonathan DeMello" , varnish-misc at varnish-cache.org, "Per Buer" Sent: Tuesday, March 8, 2011 5:36:18 PM Subject: Re: Lots of configs In message <4D7688A1.8020202 at sbgnet.com>, David Helkowski writes: >On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: >> In message, Jona >My anger was due primarily to the statement "if you got to deploy a >whole bunch of scary inline C that will seriously intimidate the >summer intern and makes all the other fear the config it's just not >worth it." > >The contents of that private email essentially boiled to me saying, in >many more words: >"not everyone is as stupid as you". Well, to put it plainly and simply: In the context of the Varnish project, viewed through the prism that is our project philosphy, Per is Right and you are Wrong. Inline C is the last resort, it is there because there needs to be a last resort, but optimizations like the one you propose does not belong *anywhere* inline C or not. End of story. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Wed Mar 9 00:45:57 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 23:45:57 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 18:09:59 EST." <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> Message-ID: <7556.1299627957@critter.freebsd.dk> In message <635787275.835680.1299625799368.JavaMail.root at mail-01.sbgnet.com>, D avid Helkowski writes: >Please refrain from continuing to message the list on this topic. I prefer the archives show the full exchange, should any of your future potential employers google your name. If you do not like that, then you should think carefully about what you post in public. >My only response to this latest attack is that Varnish is open >source software. I can and will publish a how-to on using hashing >in the manner that I have described. There is nothing that you can >do to stop it, and I am sure people will take advantage of it. Have fun :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From jhalfmoon at milksnot.com Wed Mar 9 00:50:45 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Wed, 09 Mar 2011 00:50:45 +0100 Subject: Lots of configs In-Reply-To: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> References: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> Message-ID: <4D76C0D5.9030809@milksnot.com> On 03/09/2011 12:09 AM, David Helkowski wrote: > Please refrain from continuing to message the list on this topic. > I will not do so either, provided you stop sending things like Are you proposing some kind of 'hushing' algorithm? > 'David is wrong, and his ideas should never be considered' to the list. > It is entirely childish, and I am sure people are sick of seeing this > sort of garbage in the list. > > My only response to this latest attack is that Varnish is open > source software. I can and will publish a how-to on using hashing > in the manner that I have described. There is nothing that you can > do to stop it, and I am sure people will take advantage of it. 
> From geoff at uplex.de Wed Mar 9 00:58:18 2011 From: geoff at uplex.de (Geoff Simmons) Date: Wed, 09 Mar 2011 00:58:18 +0100 Subject: Lots of configs In-Reply-To: <4D76C0D5.9030809@milksnot.com> References: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> <4D76C0D5.9030809@milksnot.com> Message-ID: <4D76C29A.1030502@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 3/9/11 12:50 AM, Johnny Halfmoon wrote: > > Are you proposing some kind of 'hushing' algorithm? Some threads fail silently, whereas other fail verbosely. - -- UPLEX Systemoptimierung Schwanenwik 24 22087 Hamburg http://uplex.de/ Mob: +49-176-63690917 -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.14 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNdsKZAAoJEOUwvh9pJNUR6UUQAKICKY6d76uO3NIDOtoQm7y2 AjqhL9M12lewkF3zWTfBnZjiiOSnifIXcdgBIipOE/VgkUIdxTO5PprIB9Zw/IYF YoDlHyqZxpdZvSFFeMaxR/hG08RQCaT3bQ7DQaX6XEM7hO5dYaYNY7Se9SPfQoIJ sOn/W/+UtQMZokhc1onXWp59ePIgZAUulqzdtDMmTBt51RXnyDLwvgiYAwOeCpUs t1/BW6tZ+Oc6F5MvtcLdN2z/8xYEcwyFgNCh1xaqHoytu/6VPmIWEubl3ATStMM1 BDf6Qa3CUCoDiWqEhb6iU3jCMVhVQRYfKku5uXL9kreV+Ilki6egTpVy8T9Q4AfI 2VZJuriQnsLWJn5gU8Ue2Ax1t3Pi5VKD/EOD3OdTLzfLGb53AtVHtj7QsI2EOqpr /KYnbylVfVv15luhm9NFyHF6yt3yJ2Ox8LqXu4RGCJ9iKwAdjOmHpNi75yNadRj1 nzoxlMBPt+56+8yfjpbfndFY7GdBeW8H7sOCl4L9fTjwo087mGjEZQgertpMpujs c/1BvxOFvpzUVFCbYzEYFXaKz1o+pVzONev03S4praOyUMjRcuWaGU9anIO0w5cO ue8kY21o5lYPpkpmYUud+X1oECnMkHToOUmqDh6avno14vB/IrvaRqDWEn/VtsYh cMNzqqwbBL4hi8FHS6J+ =bbhj -----END PGP SIGNATURE----- From drew.smathers at gmail.com Wed Mar 9 01:23:19 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Tue, 8 Mar 2011 19:23:19 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 3:51 PM, Per Buer wrote: > Hi Drew, list. > On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers > wrote: >> >> Sorry to bump my own thread, but does anyone know of a way to set >> saintmode if a backend is down, vs. up and misbehaving (returning 500, >> etc)? >> >> Also, I added a backend probe and this indeed caused grace to kick in >> once the probe determined the backend as sick.I think the docs should >> be clarified if this isn't a bug (grace not working without probe): >> >> http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers > > Check out the trunk version of the docs. Committed some?earlier?today. > Thanks, I see a lot is getting >> >> Finally it's somewhat disconcerting that in the interim between a >> cache expiry and before varnish determines a backend as down (sick) it >> will 503 - so this could affect many clients during that window. >> Ideally, I'd like to successfully service requests if there's an >> object in the cache - period - but I guess this isn't possible now >> with varnish? > > Actually it is. In the docs there is a somewhat dirty trick where set a > marker in vcl_error, restart and pick up on the error and switch backend to > one that is permanetly down. Grace kicks in and serves the stale content. > Sometime post 3.0 there will be a refactoring of the whole vcl_error > handling and we'll end up with something a bit more elegant. > Well a dirty trick is good enough if makes a paying customer for me. :P This is working perfectly now. I would suggest giving an example of "magic marker" mentioned in the document which mentions the trick (http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html). 
Here's a stripped down version of my VCL incorporating the trick: backend webapp { .host = "127.0.0.1"; .port = "8000"; .probe = { .url = "/hello/"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; } } /* A backend that will always fail. */ backend failapp { .host = "127.0.0.1"; .port = "9000"; .probe = { .url = "/hello/"; .interval = 12h; .timeout = 1s; .window = 1; .threshold = 1; } } sub vcl_recv { if (req.http.X-Varnish-Error == "1") { set req.backend = failapp; unset req.http.X-Varnish-Error; } else { set req.backend = webapp; } if (! req.backend.healthy) { set req.grace = 24h; } else { set req.grace = 1m; } } sub vcl_error { if ( req.http.X-Varnish-Error != "1" ) { set req.http.X-Varnish-Error = "1"; return (restart); } } sub vcl_fetch { set beresp.grace = 24h; } From dhelkowski at sbgnet.com Wed Mar 9 01:48:30 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 8 Mar 2011 19:48:30 -0500 (EST) Subject: Lots of configs In-Reply-To: <7556.1299627957@critter.freebsd.dk> Message-ID: <978396342.836068.1299631710854.JavaMail.root@mail-01.sbgnet.com> ----- Original Message ----- From: "Poul-Henning Kamp" To: "David Helkowski" Cc: "Jonathan DeMello" , varnish-misc at varnish-cache.org, "Per Buer" Sent: Tuesday, March 8, 2011 6:45:57 PM Subject: Re: Lots of configs In message <635787275.835680.1299625799368.JavaMail.root at mail-01.sbgnet.com>, D avid Helkowski writes: >>Please refrain from continuing to message the list on this topic. >I prefer the archives show the full exchange, should any of your >future potential employers google your name. Once again; this is pretty rude. My point is not to waste people's energy reading this, not to attempt to hide anything. At a previous point in my past; I had my entire diary posted on the internet; over 1 million words. You won't find that I am the sort of person's who attempt to hide anything. >If you do not like that, then you should think carefully about >what you post in public. I agree with that, but I think that you are responsible for how you treat or abuse others in public. If you are in a position of authority and knowledge you should treat those beneath you well; not mock them. >>My only response to this latest attack is that Varnish is open >>source software. I can and will publish a how-to on using hashing >>in the manner that I have described. There is nothing that you can >>do to stop it, and I am sure people will take advantage of it. Have fun :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From dhelkowski at sbgnet.com Wed Mar 9 01:51:01 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 8 Mar 2011 19:51:01 -0500 (EST) Subject: Lots of configs In-Reply-To: <4D76C0D5.9030809@milksnot.com> Message-ID: <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> ---- Original Message ----- From: "Johnny Halfmoon" To: "David Helkowski" Cc: "Poul-Henning Kamp" , varnish-misc at varnish-cache.org, "Jonathan DeMello" Sent: Tuesday, March 8, 2011 6:50:45 PM Subject: Re: Lots of configs On 03/09/2011 12:09 AM, David Helkowski wrote: >> Please refrain from continuing to message the list on this topic. >> I will not do so either, provided you stop sending things like >Are you proposing some kind of 'hushing' algorithm? I posted this request at the suggestion of a 3rd party; because I did not wish to waste people's time. 
Seeing as PHK is essentially the authority and controller of the list; I am going to continue responding as appropriate unless I am directed not to by PHK. >> 'David is wrong, and his ideas should never be considered' to the list. >> It is entirely childish, and I am sure people are sick of seeing this >> sort of garbage in the list. >> >> My only response to this latest attack is that Varnish is open >> source software. I can and will publish a how-to on using hashing >> in the manner that I have described. There is nothing that you can >> do to stop it, and I am sure people will take advantage of it. > From straightflush at gmail.com Wed Mar 9 03:06:30 2011 From: straightflush at gmail.com (AD) Date: Tue, 8 Mar 2011 21:06:30 -0500 Subject: Lots of configs In-Reply-To: <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> References: <4D76C0D5.9030809@milksnot.com> <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> Message-ID: As the OP, i would like to get the discussion on this thread back to something useful. That being said... Assuming there was an O(1) (or some ideal) mechanism to lookup req.host and map it to a custom function, i notice that i get the error "Unused function custom_host" if there is not an explicit call in the VCL. Aside from having a dummy subroutine that listed all the "calls", is there a cleaner way to deal with this? I am also going to take a stab at making this a module, i already did this with an md5 function so I think that will solve the "pre-loading" problem. Adam On Tue, Mar 8, 2011 at 7:51 PM, David Helkowski wrote: > ---- Original Message ----- > From: "Johnny Halfmoon" > To: "David Helkowski" > Cc: "Poul-Henning Kamp" , > varnish-misc at varnish-cache.org, "Jonathan DeMello" < > demello.itp at googlemail.com> > Sent: Tuesday, March 8, 2011 6:50:45 PM > Subject: Re: Lots of configs > > > On 03/09/2011 12:09 AM, David Helkowski wrote: > >> Please refrain from continuing to message the list on this topic. > >> I will not do so either, provided you stop sending things like > > > >Are you proposing some kind of 'hushing' algorithm? > > I posted this request at the suggestion of a 3rd party; because I did > not wish to waste people's time. Seeing as PHK is essentially the authority > and controller of the list; I am going to continue responding as > appropriate > unless I am directed not to by PHK. > > >> 'David is wrong, and his ideas should never be considered' to the list. > >> It is entirely childish, and I am sure people are sick of seeing this > >> sort of garbage in the list. > >> > >> My only response to this latest attack is that Varnish is open > >> source software. I can and will publish a how-to on using hashing > >> in the manner that I have described. There is nothing that you can > >> do to stop it, and I am sure people will take advantage of it. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Wed Mar 9 09:18:52 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 09 Mar 2011 08:18:52 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 21:06:30 EST." Message-ID: <9583.1299658732@critter.freebsd.dk> In message , AD w rites: >As the OP, i would like to get the discussion on this thread back to >something useful. That being said... 
Arthur and I brainstormed this issue on our way to cake after VUG3 and a couple of ideas came up which may be worth looking at. At the top-heavy end, is having VCL files tell which domains they apply to, possibly something like: domains { "*.example.com"; ! "notthisone.example.com"; "*.example.biz"; } There are a large number of "what happens if I then do..." questions that needs answered sensibly to make that work, but I think it is doable and worthwhile. The next one we talked about is letting backend declarations declare which domains they apply to, pretty much same syntax as above, now just inside a backend. This would modify the current default backend selection and nothing more. There needs to be some kind of "matched no backend" handling. And finally, since most users with massive domains will need or want to reload VCL for trivial addition/removals of domains, somebody[TM] should probably write a VMOD which looks a domain up in a suitable database file (db/dbm/whatever) There are many ways we can mould and modify these ideas, and I invite you to hash out which way you would prefer it work... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From david at firechaser.com Wed Mar 9 09:41:14 2011 From: david at firechaser.com (David Murphy) Date: Wed, 9 Mar 2011 09:41:14 +0100 Subject: Lots of configs In-Reply-To: <9583.1299658732@critter.freebsd.dk> References: <9583.1299658732@critter.freebsd.dk> Message-ID: >And finally, since most users with massive domains will need or want >to reload VCL for trivial addition/removals of domains, somebody[TM] >should probably write a VMOD which looks a domain up in a suitable >database file (db/dbm/whatever) I was wondering, is there any way for us to be able to run an external lookup that can form part of decision-making in VCL. For example, a file or db lookup to see if a value is true/false and that will determine which sections of VCL code run? A real-world example is where we have a waiting room feature that aims to limit traffic reaching a payment portal. When the waiting room is on we'd like Varnish to hold onto the traffic. When turned off we would then forward the requests hitting VCL to the payment system. Currently we are doing this in the backend with a PHP / MySQL lookup and it works, but it's far from ideal. Perhaps a better way would be to pass in the true/false value as a command line arg to Varnish as a 'reload' rather than restart (similar to Apache, I guess) so we don't lose connections. Would also mean that no lookups are required per request. The waiting room state changes on/off only a few times a day. Not sure if this is possible or even desirable but would appreciate your thoughts/suggestions. Thanks, David From phk at phk.freebsd.dk Wed Mar 9 10:01:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 09 Mar 2011 09:01:08 +0000 Subject: Lots of configs In-Reply-To: Your message of "Wed, 09 Mar 2011 09:41:14 +0100." 
Message-ID: <9904.1299661268@critter.freebsd.dk> In message , Davi d Murphy writes: >>And finally, since most users with massive domains will need or want >>to reload VCL for trivial addition/removals of domains, somebody[TM] >>should probably write a VMOD which looks a domain up in a suitable >>database file (db/dbm/whatever) > >I was wondering, is there any way for us to be able to run an external >lookup that can form part of decision-making in VCL. For example, a >file or db lookup to see if a value is true/false and that will >determine which sections of VCL code run? Writing a VMOD that does that shouldn't be hard, we just need to find somebody with a pocket full of round tuits. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From paul.lu81 at gmail.com Tue Mar 8 18:50:44 2011 From: paul.lu81 at gmail.com (Paul Lu) Date: Tue, 8 Mar 2011 09:50:44 -0800 Subject: A lot of if statements to handle hostnames In-Reply-To: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> References: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Message-ID: Primarily just to make the code cleaner and a little concerned if I have a lot of hostnames. 100 for example. Having to potentially traverse several if statements for each request seems inefficient to me. Thank you, Paul On Mon, Mar 7, 2011 at 11:32 PM, Indranil Chakravorty < indranilc at rediff-inc.com> wrote: > Apart from improving the construct to if ... elseif , could you please tell > me the reason why you are looking for a different way? Is it only for ease > of writing less statements or is there some other reason you foresee? I am > asking because we also have a number of similar construct in our vcl. > Thanks. > > Thanks, > Neel > > On Tue, 08 Mar 2011 12:31:11 +0530 Paul Lu wrote > > >Hi, > > > >I have to work with a lot of domain names in my varnish config and I was > wondering if there is an easier to way to match the hostname other than a > series of if statements. Is there anything like a hash? Or does anybody have > any C code to do this? > > > >example pseudo code: > >================================= > >vcl_recv(){ > > > > if(req.http.host == "www.domain1.com") > > { > > set req.backend = www_domain1_com; > > # more code > > return(lookup); > > } > > if(req.http.host == "www.domain2.com") > > { > > set req.backend = www_domain2_com; > > # more code > > return(lookup); > > } > > if(req.http.host == "www.domain3.com") > > { > > set req.backend = www_domain3_com; > > # more code > > return(lookup); > > } > >} > >================================= > > > >Thank you, > >Paul > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From perbu at varnish-software.com Wed Mar 9 10:13:18 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 9 Mar 2011 10:13:18 +0100 Subject: A lot of if statements to handle hostnames In-Reply-To: References: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Message-ID: On Tue, Mar 8, 2011 at 6:50 PM, Paul Lu wrote: > Primarily just to make the code cleaner and a little concerned if I have a > lot of hostnames. 100 for example. Having to potentially traverse several > if statements for each request seems inefficient to me. > Don't worry about it. I think we've clearly established that it isn't (in a parallel thread). -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtshilston at gmail.com Wed Mar 9 13:05:31 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Wed, 9 Mar 2011 12:05:31 +0000 Subject: Lots of configs In-Reply-To: <9583.1299658732@critter.freebsd.dk> References: <9583.1299658732@critter.freebsd.dk> Message-ID: <69A34F57-3B91-41B0-8DD8-49191E80E268@gmail.com> On 9 Mar 2011, at 08:18, Poul-Henning Kamp wrote: > In message , AD w > rites: > >> As the OP, i would like to get the discussion on this thread back to >> something useful. That being said... > > Arthur and I brainstormed this issue on our way to cake after VUG3 > and a couple of ideas came up which may be worth looking at. > > At the top-heavy end, is having VCL files tell which domains they > apply to, possibly something like: > > domains { > "*.example.com"; > ! "notthisone.example.com"; > "*.example.biz"; > } I was chatting to a Varnish administrator at a PHP conference in London a couple of weeks ago. They run Varnish for a very high profile site which has lots of sub-sites that have delegated web teams. So, for example, all traffic to www.example.com hits Varnish, and www.example.com/alpha is managed by a completely separate team to www.example.com/beta. Thanks to Varnish, each base path can be routed to different backends. However, the varnish behaviour itself is different for different paths. My understanding is that each team submits their VCL to the central administrator who sticks it together, and that each path/site has a separate set of vcl_* functions. Whilst I obviously don't know exactly how they're doing this, I think that this different level of behaviour splitting would be worth considering as part of these discussions. 
So, perhaps it might make sense to have individual VCL files that declare what they're interested in, such as: ==alpha.vcl== appliesto { "alpha"; "alpha.example.com"; "alpha.example.co.uk"; } sub vcl_recv { set req.backend = alphapool; } == and then in the main VCL, do something pseudo-code like: ==default.vcl== include "alpha.vcl" sub vcl_recv { if (req.http.host == "www.example.com") { /* Do some regex to find the first part of the path, and see if there's a valid config for it */ callconfig(reg_sub(req.url, "/(.*)(/.*)?", "\1")); } else { /* Try to see if there's a match for this hostname */ callconfig(req.http.host); } /* By this point, nothing has matched, so call some default behaviour */ callconfig("default"); } == So, callconfig effectively works a bit like the current 'return' statement, but only if a config that 'appliesto' the defined string is found in a config - once the config is called, no further code in the calling function is executed. By detaching this behaviour from the concept of a "domain" in PHK's example, then this pattern could be used for a wider range of scenarios - perhaps switching based on the requestor's IP / ACL matches or whatever else Varnish users might need. Rob From scaunter at topscms.com Wed Mar 9 16:11:05 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 9 Mar 2011 10:11:05 -0500 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <8371073562863633333@unknownmsgid> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net> I don't think pass or pipe is the issue. 503 means the backend can't answer, and calling pipe won't change that. Here's an example. Set up a "patient" back end; you can collect your back ends into a patient director. backend waitalongtime { .host = "a.b.c.d"; .port = "80"; .first_byte_timeout = 60s; .between_bytes_timeout = 10s; .probe = { .url = "/areyouthere/"; .timeout = 10s; .interval = 15s; .window = 5; .threshold = 1; } } Check the number of restarts before you select a back end. Try your normal, fast director first. if (req.restarts == 0) { set req.backend = fast; } else if (req.restarts == 1) { set req.backend = waitalongtime; } else if (req.restarts == 2) { set req.backend = waitalongtime; } else { set req.backend = waitalongtime; } If you get a 503, catch it in error, and increment restart. This will select the slow back end. sub vcl_error { if (obj.status == 503 && req.restarts < 4) { restart; } } Stefan Caunter Operations Torstar Digital m: (416) 561-4871 -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: March-08-11 5:32 PM To: Stewart Robinson Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish 503ing on ~1/100 POSTs On Tue, 8 Mar 2011, Stewart Robinson wrote: > Whilst this may not be a fix to a possible bug in varnish have you tried > switching posts to pipe instead of pass? This might well help, but I'd have no way of knowing for sure. The backend servers indicate the requests via varnish are processed correctly. I'm not able to reproduce the problem at will so I'd be relying on user feedback to determine if the problem still occurs and that's unreliable at best. It is of course better than having the problem occur, but I'd rather take the opportunity to try and get to the bottom of it while I can. I only deployed varnish a couple of days ago. The site will be fairly quiet until the end of the week. I'll resort to pipe if I've not got a fix by then. 
-Ronan _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From ronan at iol.ie Wed Mar 9 16:38:28 2011 From: ronan at iol.ie (Ronan Mullally) Date: Wed, 9 Mar 2011 15:38:28 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net> References: <8371073562863633333@unknownmsgid> <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net> Message-ID: Hi Stephan, On Wed, 9 Mar 2011, Caunter, Stefan wrote: > I don't think pass or pipe is the issue. 503 means the backend can't > answer, and calling pipe won't change that. > Set up a "patient" back end; you can collect your back ends into a > patient director. Ah, the penny drops. I was thinking of "patient" in the context of health checks (ie a sick backend). I'll give it a go, but my gut feeling is that the backends aren't at fault. I'm seeing this error when they are both backends lightly loaded (load average around 1 on an 8 core box), and the rate of incidence does not appear to be related to the load - I actually saw a slightly lower rate (under 1%) last night when utilisation was higher, and as I said previously when I used pound instead of varnish this wasn't a problem. I'll try the patient backend and keep a close eye on the error rate vs utilisation over the next few days. Thanks for your help. -Ronan > -----Original Message----- > >From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan > Mullally > Sent: March-08-11 5:32 PM > To: Stewart Robinson > Cc: varnish-misc at varnish-cache.org > Subject: Re: Varnish 503ing on ~1/100 POSTs > > On Tue, 8 Mar 2011, Stewart Robinson wrote: > > > Whilst this may not be a fix to a possible bug in varnish have you > tried > > switching posts to pipe instead of pass? > > This might well help, but I'd have no way of knowing for sure. The > backend servers indicate the requests via varnish are processed > correctly. > I'm not able to reproduce the problem at will so I'd be relying on user > feedback to determine if the problem still occurs and that's unreliable > at > best. > > It is of course better than having the problem occur, but I'd rather > take > the opportunity to try and get to the bottom of it while I can. I only > deployed varnish a couple of days ago. The site will be fairly quiet > until the end of the week. I'll resort to pipe if I've not got a fix by > then. > > > -Ronan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From nadahalli at gmail.com Wed Mar 9 23:46:58 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Wed, 9 Mar 2011 17:46:58 -0500 Subject: Duplicate Purges / Purge.Length size reduction Message-ID: Hello All. I have a few questions on the length of the purge.list. 1 - Is it something to be worried about? What's the optimal n_struct_object to n_active_purges ratio? 2 - If I have periodic purge adds that are adding the same URL pattern to be purged, does varnish do any internal optimization? 3 - Is it better to have a ban lurker in place to keep the purge.list length under check? -T -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tfheen at varnish-software.com Thu Mar 10 08:08:07 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Thu, 10 Mar 2011 08:08:07 +0100 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: (Ronan Mullally's message of "Tue, 8 Mar 2011 20:38:08 +0000 (GMT)") References: Message-ID: <871v2fwizs.fsf@qurzaw.varnish-software.com> ]] Ronan Mullally | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. | The current load is light (up to a couple of thousand active sessions, | peak is around five thousand). Varnish has a fairly simple config with | a director consisting of two Apache backends: This looks a bit odd: | backend backend1 { | .host = "1.2.3.4"; | .port = "80"; | .connect_timeout = 5s; | .first_byte_timeout = 90s; | .between_bytes_timeout = 90s; | A typical request is below. The first attempt fails with: | | 33 FetchError c http first read error: -1 0 (Success) This just means the backend closed the connection on us. | there is presumably a restart and the second attempt (sometimes to | backend1, sometimes backend2) fails with: | | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This is a timeout, however: | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 That 2.89s backend response time doesn't add up with your timeouts. Can you see if you can get a tcpdump of what's going on? Regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From ronan at iol.ie Thu Mar 10 14:29:23 2011 From: ronan at iol.ie (Ronan Mullally) Date: Thu, 10 Mar 2011 13:29:23 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <871v2fwizs.fsf@qurzaw.varnish-software.com> References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Hej Tollef, On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > | 33 FetchError c http first read error: -1 0 (Success) > > This just means the backend closed the connection on us. > > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > This is a timeout, however: > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > That 2.89s backend response time doesn't add up with your timeouts. Can > you see if you can get a tcpdump of what's going on? I'll see what I can do. Varnish is serving an average of about 20 objects per second so there'll be a lot of data to gather / sift through. The following numbers might prove useful - they're counts of the number of successful GETs, POSTs and 503s since 17:00 yesterday. 
              GET                   POST
 Hour       200   503             200   503
 -------------------------------------------------
 17:00    72885     0 (0.00%)     841     0 (0.00%)
 18:00    69266     0 (0.00%)     858     6 (0.70%)
 19:00    65030     0 (0.00%)     866     3 (0.35%)
 20:00    70289     0 (0.00%)     975     8 (0.82%)
 21:00   105767     0 (0.00%)    1214     5 (0.41%)
 22:00    86236     0 (0.00%)     834     3 (0.36%)
 23:00    67078     0 (0.00%)     893     2 (0.22%)
 00:00    48042     0 (0.00%)     669     4 (0.60%)
 01:00    35966     0 (0.00%)     479     0 (0.00%)
 02:00    29598     0 (0.00%)     395     3 (0.76%)
 03:00    25819     0 (0.00%)     359     0 (0.00%)
 04:00    22835     0 (0.00%)     366     4 (1.09%)
 05:00    24487     0 (0.00%)     315     1 (0.32%)
 06:00    26583     0 (0.00%)     353     4 (1.13%)
 07:00    30433     0 (0.00%)     398     2 (0.50%)
 08:00    37394     0 (0.00%)     363     9 (2.48%)
 09:00    44462     1 (0.00%)     526     4 (0.76%)
 10:00    49891     2 (0.00%)     611     4 (0.65%)
 11:00    54826     1 (0.00%)     599     7 (1.17%)
 12:00    60765     6 (0.01%)     615     1 (0.16%)
 13:00    18941     0 (0.00%)     190     0 (0.00%)

Apart from a handful of 503s to GET requests this morning (which I've not had a chance to investigate), the problem almost exclusively affects POSTs. The frequency of the problem does not appear to be related to the load - the highest incidence does not match the busiest periods.

I'll get back to you when I have a few packet traces, most likely next week. FWIW, I forgot to mention in my previous posts: I'm running 2.1.5 on a Debian Lenny VM.

-Ronan

From allan_wind at lifeintegrity.com Thu Mar 10 16:29:18 2011
From: allan_wind at lifeintegrity.com (Allan Wind)
Date: Thu, 10 Mar 2011 15:29:18 +0000
Subject: SSL
Message-ID: <20110310152918.GJ1675@vent.lifeintegrity.localnet>

Is the current thinking still that SSL support will not be integrated into varnish? I found the post in the archives from last year that speaks of nginx as a front-end. Has anyone looked into the others, stunnel or pound, and could they share their experience? I cannot tell from its web site whether haproxy has added SSL support yet.

Here is what the pound web site[1] says about stunnel:

stunnel: probably comes closest to my understanding of software design (does one job only and does it very well). However, it lacks the load balancing and HTTP filtering features that I considered necessary. Using stunnel in front of Pound (for HTTPS) would have made sense, except that integrating HTTPS into Pound proved to be so simple that it was not worth the trouble.

[1] http://www.apsis.ch/pound/

/Allan

--
Allan Wind
Life Integrity, LLC

From phk at phk.freebsd.dk Thu Mar 10 16:41:00 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 10 Mar 2011 15:41:00 +0000
Subject: SSL
In-Reply-To: Your message of "Thu, 10 Mar 2011 15:29:18 GMT." <20110310152918.GJ1675@vent.lifeintegrity.localnet>
Message-ID: <82042.1299771660@critter.freebsd.dk>

In message <20110310152918.GJ1675 at vent.lifeintegrity.localnet>, Allan Wind writes:

>Is the current thinking still that SSL support will not be
>integrated into varnish?

Yes, that is current thinking.

I can see no advantages that outweigh the disadvantages, and a realistic implementation would not be significantly different from running a separate process to do the job in the first place.

http://www.varnish-cache.org/docs/trunk/phk/ssl.html

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
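Since the answer above boils down to "terminate TLS in a separate process and let it hand plain HTTP to Varnish", here is a minimal sketch of what that can look like with Pound, which several posters below report using. The certificate path and the Varnish listen port are placeholders I have assumed, not values taken from this thread, and the config is untested:

# pound.cfg - terminate HTTPS, forward plain HTTP to Varnish on the loopback
ListenHTTPS
    Address 0.0.0.0
    Port    443
    # combined private key + certificate in one PEM file (assumed path)
    Cert    "/etc/pound/example.com.pem"
    Service
        BackEnd
            # Varnish assumed to be listening on 127.0.0.1:6081
            Address 127.0.0.1
            Port    6081
        End
    End
End

Varnish itself then never sees TLS at all; if the backends need to know that a request arrived over HTTPS, the terminating proxy has to tell them, typically via an extra request header such as X-Forwarded-Proto that VCL can inspect and pass along.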
From sime at sime.net.au Fri Mar 11 08:23:07 2011 From: sime at sime.net.au (Simon Males) Date: Fri, 11 Mar 2011 18:23:07 +1100 Subject: SSL In-Reply-To: <20110310152918.GJ1675@vent.lifeintegrity.localnet> References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: On Fri, Mar 11, 2011 at 2:29 AM, Allan Wind wrote: > Is the current thinking still that SSL support will not be > integrated into varnish? ?I found the post in the archives from > last year that speaks of nginx as front-end. ?Has anyone looked > into the other stunnel or pound and could share their experience? > I cannot tell from their web site if haproxy added SSL support > yet. Using pound 2.4.3 (a little dated) over here. Works well. I've found pound will throw errors in /var/log a few seconds after a Chrome connection (Connection timed out). Though this isn't reflected on the client side. Hate to crap on pound's parade, but I've also some client side errors, but they are not reproducible on demand. http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 -- Simon Males From michal.taborsky at nrholding.com Fri Mar 11 09:31:00 2011 From: michal.taborsky at nrholding.com (Michal Taborsky) Date: Fri, 11 Mar 2011 09:31:00 +0100 Subject: SSL In-Reply-To: References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <4D79DDC4.6010606@nrholding.com> Dne 11.3.2011 8:23, Simon Males napsal(a): > I've found pound will throw errors in /var/log a few seconds after a > Chrome connection (Connection timed out). Though this isn't reflected > on the client side. > > Hate to crap on pound's parade, but I've also some client side errors, > but they are not reproducible on demand. > > http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 As far as I know, Chrome uses pre-connect to improve performance. What it does, is it creates immediately more than one TCP/IP connection to the target IP address, because most pages contain images and styles and javascript, and Chrome knows, that it will be downloading these in parallel. So it saves time on handshaking, when the connections are needed later. It will also keep the connections open for quite a long time and maybe pound times out these connections when nothing is happening. This sort of thing can happen with any browser, but I think Chrome is a lot more aggressive than others, so it stands out. -- Michal T?borsk? chief systems architect Netretail Holding, B.V. http://www.nrholding.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.boer at bizztravel.nl Fri Mar 11 08:53:05 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Fri, 11 Mar 2011 08:53:05 +0100 Subject: SSL In-Reply-To: References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <4D79D4E1.4060400@bizztravel.nl> We use Pound as well. It works fine. Regards, Martin On 03/11/2011 08:23 AM, Simon Males wrote: > On Fri, Mar 11, 2011 at 2:29 AM, Allan Wind > wrote: >> Is the current thinking still that SSL support will not be >> integrated into varnish? I found the post in the archives from >> last year that speaks of nginx as front-end. Has anyone looked >> into the other stunnel or pound and could share their experience? >> I cannot tell from their web site if haproxy added SSL support >> yet. > Using pound 2.4.3 (a little dated) over here. Works well. > > I've found pound will throw errors in /var/log a few seconds after a > Chrome connection (Connection timed out). 
Though this isn't reflected > on the client side. > > Hate to crap on pound's parade, but I've also some client side errors, > but they are not reproducible on demand. > > http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 > From lampe at hauke-lampe.de Sun Mar 13 00:31:41 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Sun, 13 Mar 2011 00:31:41 +0100 Subject: caching of restarted requests possible? In-Reply-To: <4D6DEE1A.80609@gadu-gadu.pl> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> <4D6DEE1A.80609@gadu-gadu.pl> Message-ID: <4D7C025D.7060603@hauke-lampe.de> On 02.03.2011 08:13, ?ukasz Barszcz / Gadu-Gadu wrote: > Check out patch attached to ticket > http://varnish-cache.org/trac/ticket/412 which changes behavior to what > you need. Thanks again, it works! I adapted the patch for varnish 2.1.5: http://cfg.openchaos.org/varnish/patches/varnish-2.1.5-cache_restart.diff A working example can be seen here: http://cfg.openchaos.org/varnish/vcl/special/backend_select_updates.vcl Hauke. From checker at d6.com Sun Mar 13 05:28:00 2011 From: checker at d6.com (Chris Hecker) Date: Sat, 12 Mar 2011 20:28:00 -0800 Subject: best way to not cache large files? Message-ID: <4D7C47D0.9050809@d6.com> I have a 400mb file that I just want apache to serve. What's the best way to do this? I can put it in a directory and tell varnish not to cache stuff that matches that dir, but I'd rather just make a general rule that varnish should ignore >=20mb files or whatever. Thanks, Chris From straightflush at gmail.com Sun Mar 13 15:26:54 2011 From: straightflush at gmail.com (AD) Date: Sun, 13 Mar 2011 10:26:54 -0400 Subject: best way to not cache large files? In-Reply-To: <4D7C47D0.9050809@d6.com> References: <4D7C47D0.9050809@d6.com> Message-ID: i dont think you can check the body size (at least it seems that way with the existing req.* objects ). If you know the mime-type of the file you might just be able to pipe the mime type if that works for all file sizes ? I wonder if there is a way to pass the req object into some inline C that can access the body somehow? On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker wrote: > > I have a 400mb file that I just want apache to serve. What's the best way > to do this? I can put it in a directory and tell varnish not to cache stuff > that matches that dir, but I'd rather just make a general rule that varnish > should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kacperw at gmail.com Sun Mar 13 17:30:44 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Sun, 13 Mar 2011 17:30:44 +0100 Subject: VCL BNF Message-ID: Varnish Control Language grammar in BNF notation ================================================ The VCL compiler is a one-step lex-parse-prune-symtable-typecheck-emit compiler. Having looked for it several times myself, and having discussed it with several others the conclusion was that VCL needs a proper grammar. Grammars, as many know, are useful in several circumstances. BNF based on PHK's precedence rules http://www.varnish-cache.org/docs/trunk/phk/vcl_expr.html as well as vcc_Lexer and vcc_Parse from HEAD. 
For those of us unfamiliar with BNF: http://www.cui.unige.ch/db-research/Enseignement/analyseinfo/AboutBNF.html Note on BNF syntax: As the BNF canon is somewhat unweildy, I've opted for the convention of specifying terminal tokens in lowercase, while non-terminals are denoted in UPPERCASE. Optional statements are the usual [..] and repeated statements are {..}. To improve portability there are quotes around literals as this does not sacrifice readability. As for token and production names, I've tried to stay as true to the source code as possible without sacrificing readability. As an extension to BNF I have included comments, which are lines starting with '#'. I have attempted to comment grammar particular to major versions of Varnish and other notables. I have not backward-checked the grammar, and would appreciate comments on what grammar differences we see in V2.0 and 2.1 as compared to 3.0. There are bound to be bugs. Feedback and comments appreciated. v0.1 .. not yet machine parsable(?)! Nonterminals ------------ VCL ::= ACL | SUB | BACKEND | DIRECTOR | PROBE | IMPORT | CSRC ACL ::= 'acl' identifier '{' {ACLENTRY} '}' SUB ::= 'sub' identifier COMPOUND BACKEND ::= 'backend' identifier '{' { ['set|backend'] BACKENDSPEC } '}' PROBE ::= 'probe' identifier '{' PROBESPEC '}' # VMod imports are new in 3.0 IMPORT ::= 'import' identifier [ 'from' string ] ';' CSRC ::= 'C{' inline-c '}C' # director definitions - simple variant DIRECTOR ::= 'director' dirtype identifier '{' DIRSPEC '}' dirtype ::= 'hash' | 'random' | 'client' | 'round-robin' | 'dns' # can do better: specify production rule for every director type DIRECTOR ::= 'director' ('hash'|'random'|'client')' identifier '{' DIRSPEC '}' 'director' 'round-robin' identifier '{' { '.' BACKENDEF } '}' 'director' 'dns' identifier '{' DNSSPEC '}' DIRSPEC ::= [ '.' 'retries' '=' uintval ';' ] { '{' '.' BACKENDEF [ '.' 'weight' '=' numval ';' ] '}' } DNSSPEC ::= { '.' BACKENDEF } [ '.' 'ttl' '=' timeval ';' ] [ '.' 'suffix' '=' string ';' ] [ '.' DNSLIST ] DNSLIST ::= '{' { iprange ';' [ BACKENDSPEC ] } '}' BACKENDEF ::= 'backend' ( BACKENDSPEC | identifier ';' ) # field spec as used in backend and probe definitions SPEC ::= '{' { '.' identifier = fieldval ';' } '}' # can do better: devil is in the detail on this one BACKENDSPEC ::= '.' 'host' '=' string ';' | '.' 'port' '=' string ';' # wow I had no idea... | '.' 'host_header' '=' string ';' | '.' 'connect_timeout''=' timeval ';' | '.' 'first_byte_timeout' '=' timeval ';' | '.' 'between_bytes_timeout' '=' timeval ';' | '.' 'max_connections '=' uintval ';' | '.' 'saintmode_treshold '=' uintval ';' | '.' 'probe' '{' {PROBESPEC} '}' ';' # another woww \0/ | '.' 'probe' identifier; PROBESPEC ::= '.' 'url' = string ';' | '.' 'request' = string ';' | '.' 'expected_response' = uintval ';' | '.' 'timeout' = timeval ';' | '.' 'interval' = timeval ';' | '.' 'window' = uintval ';' | '.' 'treshold' = uintval ';' | '.' 'initial' = uintval ';' # there is no room in BNF for 'either !(..) or (!..) or !..' 
(parens optional) ACLENTRY ::= ['!'] ['('] ['!'] iprange [')'] ';' # totally avoids dangling else yarr IFSTMT ::= 'if' CONDITIONAL COMPOUND [ { ('elsif'|'elseif') CONDITIONAL COMPOUND } [ 'else' COMPOUND ]] CONDITIONAL ::= '(' EXPR ')' COMPOUND ::= '{' {STMT} '}' STMT ::= COMPOUND | IFSTMT | CSRC | ACTIONSTMT ';' ACTIONSTMT ::= ACTION | FUNCALL ACTION :== 'error' [ '(' EXPR(int) [ ',' EXPR(string) ] ')' | EXPR(int) [ EXPR(string) ] | 'call' identifier # in vcl_fetch only | 'esi' # in vcl_hash only | 'hash_data' '(' EXPRESSION ')' | 'panic' EXPRESSION # note: purge expressions are semantically special | 'purge' '(' EXPRESSION ')' | 'purge_url' '(' EXPRESSION ')' | 'remove' variable # V2.0: could do actions without return keyword | 'return' '(' ( deliver | error | fetch | hash | lookup | pass | pipe | restart ) ')' # rollback what? | 'rollback' | 'set' variable assoper EXPRESSION | 'synthetic' EXPRESSION | 'unset' variable FUNCALL ::= variable '(' [ { FUNCALL | expr | string-list } ] ')' EXPRESSION ::= 'true' | 'false' | constant | FUNCALL | variable | '(' EXPRESSION ')' | number '*' number | number '/' number # add two strings without operator in 2.x series | duration '*' doubleval | string '+' string | number '+' number | number '-' number | timeval '+' duration | timeval '-' duration | timeval '-' timeval | duration '+' duration | duration '-' duration | EXPRESSION comparison EXPRESSION | '!' EXPRESSION | EXPRESSION '&&' EXPRESSION | EXPRESSION '||' EXPRESSION Terminals: ----------------- timeval ::= doubleval timeunit duration ::= ['-'] timeval doubleval ::= { number [ '.' [number] ] } timeunit ::= 'ms' | 's' | 'm' | 'h' | 'd' | 'w' uintval ::= { number } # unsigned fieldval ::= timeval | doubleval | timeunit | uintval constant ::= string | fieldval iprange ::= string [ '/' number ] variable ::= identifier [ '.' identifier ] comparison ::= '==' | '!=' | '<' | '>' | '<= | '>=' | '~' | '!~' assoper ::= '=' | '+=' | '-=' | '*=' | '/=' | comment ::= /* !(/*|*/)* */ // !(\n)* $ # !(\n)* $ long-string ::= '{"' !("})* '"}' shortstring ::= '"' !(\")* '"' inline-c ::= !(('}C') string ::= shortstring | longstring identifier ::= [a-zA-Z][a-zA-Z0-9_-]* number ::= [0-9]+ Lexer tokens: ----------------- ! % & + * , - . / ; < = > { | } ~ ( ) != NEQ !~ NOMATCH ++ INC += INCR *= MUL -- DEC -= DECR /= DIV << SHL <= LEQ == EQ >= GEQ >> SHR || COR && CAND elseif ELSEIF elsif ELSIF include INCLUDE if IF # include statements omitted as they are pre-processed away, they are not a syntactic device. -- http://kacper.doesntexist.org http://windows.dontexist.com Employ no technique to gain supreme enlightment. - Mar pa Chos kyi blos gros From phk at phk.freebsd.dk Sun Mar 13 17:39:32 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 13 Mar 2011 16:39:32 +0000 Subject: VCL BNF In-Reply-To: Your message of "Sun, 13 Mar 2011 17:30:44 +0100." Message-ID: <10462.1300034372@critter.freebsd.dk> In message , Kacp er Wysocki writes: >Varnish Control Language grammar in BNF notation >================================================ Not bad! Put it in a wiki page. If you don't have wiki bit, contact me with your trac login, and I'll give you one. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
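To make the grammar above a little easier to map onto real configuration, here is a tiny VCL fragment annotated with the productions it exercises. The backend name, address and regex are invented for illustration, and the annotations reflect my own reading of the draft grammar rather than anything in the original post:

# BACKEND ::= 'backend' identifier '{' { BACKENDSPEC } '}'
backend web {
    # BACKENDSPEC: '.' 'host' '=' string ';'  /  '.' 'port' '=' string ';'
    .host = "127.0.0.1";
    .port = "8080";
}

# SUB ::= 'sub' identifier COMPOUND
sub vcl_recv {
    # IFSTMT ::= 'if' CONDITIONAL COMPOUND, with comparison '~' inside the EXPRESSION
    if (req.http.host ~ "example\.com$") {
        # ACTION ::= 'set' variable assoper EXPRESSION
        set req.backend = web;
        # ACTION ::= 'return' '(' lookup ')'
        return (lookup);
    }
}

A walk like this is also a cheap way to spot gaps: anything varnishd accepts but the grammar cannot derive is a candidate for the next revision of the BNF.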
From perbu at varnish-software.com Sun Mar 13 17:49:24 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 13 Mar 2011 17:49:24 +0100 Subject: VCL BNF In-Reply-To: <10462.1300034372@critter.freebsd.dk> References: <10462.1300034372@critter.freebsd.dk> Message-ID: On Sun, Mar 13, 2011 at 5:39 PM, Poul-Henning Kamp wrote: > In message , > Kacp > er Wysocki writes: > > > >Varnish Control Language grammar in BNF notation > >================================================ > > Not bad! > > Put it in a wiki page. If you don't have wiki bit, contact me with > your trac login, and I'll give you one. > Shouldn't we rather keep it in the reference docs? -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at darkmere.gen.nz Sun Mar 13 22:22:03 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 10:22:03 +1300 (NZDT) Subject: Always sending gzip? Message-ID: Getting a weird thing where the server is returning gzip'd content even when we don't ask for it. Running 2.1.5 from the rpm packages Our config has: # If Accept-Encoding contains "gzip" then make it only include that. If not # then remove header completely. deflate just causes problems # if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|mp4|flv|pdf)$") { # No point in compressing these remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else { # unkown algorithm remove req.http.Accept-Encoding; } } But: $ curl -v -H "Accept-Encoding: fff" -H "Host: www.xxxx.com" http://yyy/themes/0/scripts/getTime.cfm > /dev/null > GET /themes/0/scripts/getTime.cfm HTTP/1.1 > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Accept: */* > Accept-Encoding: fff > Host: www.xxxx.com > < HTTP/1.1 200 OK < Server: Apache < Cache-Control: max-age=300 < X-UA-Compatible: IE=EmulateIE7 < Content-Type: text/javascript < Proxy-Connection: Keep-Alive < Content-Encoding: gzip < Content-Length: 176 < Date: Sun, 13 Mar 2011 21:13:24 GMT < Connection: keep-alive < Cache-Info: Object-Age=228, hits=504, Cache-Host=yyy, Backend-Host=apn121, healthy=yes 34 SessionOpen c 1.2.3.4 21147 :80 34 ReqStart c 1.2.3.4 21147 248469172 34 RxRequest c GET 34 RxURL c /themes/0/scripts/getTime.cfm 34 RxProtocol c HTTP/1.1 34 RxHeader c User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 34 RxHeader c Accept: */* 34 RxHeader c Accept-Encoding: fff 34 RxHeader c Host: www.xxx.com 34 VCL_call c recv lookup 34 VCL_call c hash hash 34 Hit c 248452316 34 VCL_call c hit deliver 34 VCL_call c deliver deliver 34 TxProtocol c HTTP/1.1 34 TxStatus c 200 34 TxResponse c OK 34 TxHeader c Server: Apache 34 TxHeader c Cache-Control: max-age=300 34 TxHeader c X-UA-Compatible: IE=EmulateIE7 34 TxHeader c Content-Type: text/javascript 34 TxHeader c Proxy-Connection: Keep-Alive 34 TxHeader c Content-Encoding: gzip 34 TxHeader c Content-Length: 176 34 TxHeader c Accept-Ranges: bytes 34 TxHeader c Date: Sun, 13 Mar 2011 21:11:36 GMT 34 TxHeader c Connection: keep-alive 34 TxHeader c Cache-Info: Object-Age=120, hits=243, Cache-Host=yyy, Backend-Host=apn121, healthy=yes 34 Length c 176 34 ReqEnd c 248469172 1300050696.585048914 
1300050696.585428953 0.000026941 0.000339031 0.000041008 -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From perbu at varnish-software.com Sun Mar 13 22:37:59 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 13 Mar 2011 22:37:59 +0100 Subject: Always sending gzip? In-Reply-To: References: Message-ID: On Sun, Mar 13, 2011 at 10:22 PM, Simon Lyall wrote: > > Getting a weird thing where the server is returning gzip'd content even > when we don't ask for it. > If your server is not sending "Vary: Accept-Encoding" Varnish won't know that it needs to Vary on the A-E header. Just add it and Varnish will do the right thing. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Dodd at comicrelief.com Sun Mar 13 22:51:24 2011 From: B.Dodd at comicrelief.com (Ben Dodd) Date: Sun, 13 Mar 2011 21:51:24 +0000 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Did anyone manage to find a workable solution for this? On 10 Mar 2011, at 13:29, Ronan Mullally wrote: > Hej Tollef, > > On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > >> | 33 FetchError c http first read error: -1 0 (Success) >> >> This just means the backend closed the connection on us. >> >> | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) >> >> This is a timeout, however: >> >> | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 >> >> That 2.89s backend response time doesn't add up with your timeouts. Can >> you see if you can get a tcpdump of what's going on? > > I'll see what I can do. Varnish is serving an average of about 20 objects > per second so there'll be a lot of data to gather / sift through. > > The following numbers might prove useful - they're counts of the number of > successful GETs, POSTs and 503s since 17:00 yesterday. > > GET POST > Hour 200 503 200 503 > ------------------------------------------ > 17:00 72885 0 (0.00%) 841 0 (0.00%) > 18:00 69266 0 (0.00%) 858 6 (0.70%) > 19:00 65030 0 (0.00%) 866 3 (0.35%) > 20:00 70289 0 (0.00%) 975 8 (0.82%) > 21:00 105767 0 (0.00%) 1214 5 (0.41%) > 22:00 86236 0 (0.00%) 834 3 (0.36%) > 23:00 67078 0 (0.00%) 893 2 (0.22%) > 00:00 48042 0 (0.00%) 669 4 (0.60%) > 01:00 35966 0 (0.00%) 479 0 (0.00%) > 02:00 29598 0 (0.00%) 395 3 (0.76%) > 03:00 25819 0 (0.00%) 359 0 (0.00%) > 04:00 22835 0 (0.00%) 366 4 (1.09%) > 05:00 24487 0 (0.00%) 315 1 (0.32%) > 06:00 26583 0 (0.00%) 353 4 (1.13%) > 07:00 30433 0 (0.00%) 398 2 (0.50%) > 08:00 37394 0 (0.00%) 363 9 (2.48%) > 09:00 44462 1 (0.00%) 526 4 (0.76%) > 10:00 49891 2 (0.00%) 611 4 (0.65%) > 11:00 54826 1 (0.00%) 599 7 (1.17%) > 12:00 60765 6 (0.01%) 615 1 (0.16%) > 13:00 18941 0 (0.00%) 190 0 (0.00%) > > Apart from a handful of 503s to GET requests this morning (which I've not > had a chance to investigate) the problem almost exclusively affects POSTs. > The frequency of the problem does not appear to be related to the load - > the highest incidence does not match the busiest periods. > > I'll get back to you when I have a few packet traces. It will most likely > be next week. FWIW, I forgot to mention in my previous posts, I'm running > 2.1.5 on a Debian Lenny VM. 
> > > -Ronan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > ______________________________________________________________________ > This email has been scanned by the MessageLabs Email Security System. > For more information please visit http://www.messagelabs.com/email > ______________________________________________________________________ Comic Relief 1st Floor 89 Albert Embankment London SE1 7TP Tel: 020 7820 2000 Fax: 020 7820 2222 red at comicrelief.com www.comicrelief.com Comic Relief is the operating name of Charity Projects, a company limited by guarantee and registered in England no. 1806414; registered charity 326568 (England & Wales) and SC039730 (Scotland). Comic Relief Ltd is a subsidiary of Charity Projects and registered in England no. 1967154. Registered offices: Hanover House, 14 Hanover Square, London W1S 1HP. VAT no. 773865187. This email (and any attachment) may contain confidential and/or privileged information. If you are not the intended addressee, you must not use, disclose, copy or rely on anything in this email and should contact the sender and delete it immediately. The views of the author are not necessarily those of Comic Relief. We cannot guarantee that this email (and any attachment) is virus free or has not been intercepted and amended, so do not accept liability for any damage resulting from software viruses. You should carry out your own virus checks. From simon at darkmere.gen.nz Mon Mar 14 00:12:50 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 12:12:50 +1300 (NZDT) Subject: Always sending gzip? In-Reply-To: References: Message-ID: On Sun, 13 Mar 2011, Per Buer wrote: > On Sun, Mar 13, 2011 at 10:22 PM, Simon Lyall wrote: > > Getting a weird thing where the server is returning gzip'd content even when we don't ask for it. > > > If your server is not sending "Vary: Accept-Encoding" Varnish won't know that it needs to Vary on the A-E header. Just add it > and Varnish will do the right thing. Of course, it was turning up sometimes but not always. I've changed the backend to force it in and that seems to have fixed the problem (and hopefully another we are seeing). Thankyou -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From simon at darkmere.gen.nz Mon Mar 14 03:36:39 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 15:36:39 +1300 (NZDT) Subject: Refetch new page according to result? Message-ID: This looks impossible but I thought I'd ask. The idea I had was that the cache could fetch a page and according to the result fetch another page an serve that to the user. So I could look for a 301 and if the 301 pointed to my domain I could refetch the new URL and deliver that content (without giving the user a 301). However going through the docs this appears to be impossible since I won't know the result of the backend call until vcl_fetch or vcl_deliver and neither of these give me the option to go back to vcl_recv This is for archived pages, so the app would check the archive status early in the transaction and just return a quick pointer to the archive url (which might be just flat file on disk) which varnish could grab, serve and cache forever with the user not being redirected. 
-- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From phk at phk.freebsd.dk Mon Mar 14 08:19:38 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 07:19:38 +0000 Subject: VCL BNF In-Reply-To: Your message of "Sun, 13 Mar 2011 17:49:24 +0100." Message-ID: <22707.1300087178@critter.freebsd.dk> In message , Per Buer writes: >> >> >Varnish Control Language grammar in BNF notation >> >================================================ >> >> Not bad! >> >> Put it in a wiki page. If you don't have wiki bit, contact me with >> your trac login, and I'll give you one. >> > >Shouldn't we rather keep it in the reference docs? Works for me too -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From tfheen at varnish-software.com Mon Mar 14 08:29:56 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 14 Mar 2011 08:29:56 +0100 Subject: Refetch new page according to result? In-Reply-To: (Simon Lyall's message of "Mon, 14 Mar 2011 15:36:39 +1300 (NZDT)") References: Message-ID: <87oc5erwgb.fsf@qurzaw.varnish-software.com> ]] Simon Lyall | However going through the docs this appears to be impossible since I | won't know the result of the backend call until vcl_fetch or | vcl_deliver and neither of these give me the option to go back to | vcl_recv You should be able to just change req.url and restart in vcl_fetch. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From schmidt at ze.tum.de Mon Mar 14 08:45:06 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 08:45:06 +0100 Subject: SSL In-Reply-To: <82042.1299771660@critter.freebsd.dk> References: <82042.1299771660@critter.freebsd.dk> Message-ID: <4D7DC782.6050300@ze.tum.de> Am 10.03.2011 16:41, schrieb Poul-Henning Kamp: > In message <20110310152918.GJ1675 at vent.lifeintegrity.localnet>, Allan Wind writ > es: >> Is the current thinking still that SSL support will not be >> integrated into varnish? > > Yes, that is current thinking. I can see no advantages that outweigh > the disadvantages, and a realistic implementation would not be > significantly different from running a separate process to do the > job in the first place. stunnel has the disadvantage that we lose the client IP information. Integration of SSL in varnish wouldn't have this problem. With pound this can be fixed by analysing the Forwarded-For header, but that isn't elegant. Regards Estartu -- ---------------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de Technische Universität München | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: +49 89 289-25270 | PGP-PublicKey Fax: +49 89 289-25257 | on request -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From phk at phk.freebsd.dk Mon Mar 14 08:55:40 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 07:55:40 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 08:45:06 +0100." <4D7DC782.6050300@ze.tum.de> Message-ID: <41707.1300089340@critter.freebsd.dk> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >stunnel has the disatwantage that we loose the clientIP information. 
Doesn't it set a header with this information ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From perbu at varnish-software.com Mon Mar 14 09:06:28 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 09:06:28 +0100 Subject: SSL In-Reply-To: <41707.1300089340@critter.freebsd.dk> References: <4D7DC782.6050300@ze.tum.de> <41707.1300089340@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 8:55 AM, Poul-Henning Kamp wrote: > In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: > > >stunnel has the disatwantage that we loose the clientIP information. > > Doesn't it set a header with this information ? > Yes. If we use the patched stunnel version that haproxy also uses. It requires Varnish to understand the protocol however, as the address of the client is sent at the beginning of the conversation in binary form. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Mon Mar 14 09:14:51 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 08:14:51 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 09:06:28 +0100." Message-ID: <41829.1300090491@critter.freebsd.dk> In message , Per Buer writes: >Yes. If we use the patched stunnel version that haproxy also uses. It >requires Varnish to understand the protocol however, as the address of the >client is sent at the beginning of the conversation in binary form. I would say "Use a more intelligent SSL proxy" then... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From rtshilston at gmail.com Mon Mar 14 09:22:02 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Mon, 14 Mar 2011 08:22:02 +0000 Subject: SSL In-Reply-To: <41829.1300090491@critter.freebsd.dk> References: <41829.1300090491@critter.freebsd.dk> Message-ID: <4A7E853B-F74C-415F-B324-6FEBCDA0D7E5@gmail.com> On 14 Mar 2011, at 08:14, Poul-Henning Kamp wrote: > In message , Per > Buer writes: > >> Yes. If we use the patched stunnel version that haproxy also uses. It >> requires Varnish to understand the protocol however, as the address of the >> client is sent at the beginning of the conversation in binary form. > > I would say "Use a more intelligent SSL proxy" then... We're using Varnish successfully with nginx. 
The config looks like: ===== worker_processes 1; error_log /var/log/nginx/global-error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { ssl on; ssl_certificate /etc/ssl/example.com.crt; ssl_certificate_key /etc/ssl/example.com.key; listen a.b.c.4 default ssl; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; # Proxy any requests to the local varnish instance location / { proxy_set_header "Host" $host; proxy_set_header "X-Forwarded-By" "Nginx-a.b.c.4"; proxy_set_header "X-Forwarded-For" $proxy_add_x_forwarded_for; proxy_pass a.b.c.5; } } } ==== From schmidt at ze.tum.de Mon Mar 14 09:34:41 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 09:34:41 +0100 Subject: SSL In-Reply-To: <41707.1300089340@critter.freebsd.dk> References: <41707.1300089340@critter.freebsd.dk> Message-ID: <4D7DD321.7000906@ze.tum.de> Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: > In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: > >> stunnel has the disatwantage that we loose the clientIP information. > > Doesn't it set a header with this information ? It's a tunnel. It doesn't change the stream. As I said, we use pound because it sets the header. But its another daemon to run and to setup. Another component that could fail. Integrating SSL in varnish would reduce the complexity. Regards Estartu -- ------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de TU-M?nchen | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: 089/289-25270 | Fax: 089/289-25257 | PGP-Publickey auf Anfrage -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From kacperw at gmail.com Mon Mar 14 11:16:44 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 11:16:44 +0100 Subject: VCL BNF In-Reply-To: <22707.1300087178@critter.freebsd.dk> References: <22707.1300087178@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 8:19 AM, Poul-Henning Kamp wrote: > In message , Per > Buer writes: >>> >Varnish Control Language grammar in BNF notation >>> >>> Not bad! >>> >>> Put it in a wiki page. ?If you don't have wiki bit, contact me with >>> your trac login, and I'll give you one. >>> >> >>Shouldn't we rather keep it in the reference docs? > > Works for me too The BNF might not be 100% complete yet - there might be bugs - so wiki is appropriate. kwy is my trac login. 0K From phk at phk.freebsd.dk Mon Mar 14 11:23:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 10:23:08 +0000 Subject: VCL BNF In-Reply-To: Your message of "Mon, 14 Mar 2011 11:16:44 +0100." Message-ID: <69458.1300098188@critter.freebsd.dk> In message , Kacp er Wysocki writes: >The BNF might not be 100% complete yet - there might be bugs - so wiki >is appropriate. kwy is my trac login. Agreed. You should have wiki bit now. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
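On the client-address question in the SSL thread above: whichever terminator sits in front (the nginx setup shown earlier, pound, or a PROXY-capable stunnel), the usual arrangement is to let Varnish trust X-Forwarded-For only when the connection really comes from the terminator. A minimal VCL sketch, assuming the terminator runs locally on 127.0.0.1 and sets the header the way the nginx config above does; the ACL name is illustrative:

acl ssl_terminators {
    "127.0.0.1";
}

sub vcl_recv {
    # Trust X-Forwarded-For only from our own SSL terminator; for clients
    # hitting Varnish directly over HTTP, overwrite whatever they sent so
    # the header cannot be spoofed.
    if (!(client.ip ~ ssl_terminators)) {
        set req.http.X-Forwarded-For = client.ip;
    }
}

Backend logging and any address-based logic can then read req.http.X-Forwarded-For instead of client.ip, which will always be the terminator's address for TLS traffic.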
From kacperw at gmail.com Mon Mar 14 12:05:27 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 12:05:27 +0100 Subject: SSL In-Reply-To: <4D7DD321.7000906@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 9:34 AM, Gerhard Schmidt wrote: > Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: >> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >> >>> stunnel has the disatwantage that we loose the clientIP information. >> >> Doesn't it set a header with this information ? > > It's a tunnel. It doesn't change the stream. As I said, we use pound because > it sets the header. But its another daemon to run and to setup. Another > component that could fail. Integrating SSL in varnish would reduce the > complexity. What you meant to say is "integrating SSL in Varnish would increase complexity". Putting that component inside varnish doesn't automatically make it infallible. As an added bonus, if SSL is in a separate process it won't bring the whole server down if it fails, if that's the kind of stuff you're worried about. 0K -- http://kacper.doesntexist.org http://windows.dontexist.com Employ no technique to gain supreme enlightenment. - Mar pa Chos kyi blos gros From kacperw at gmail.com Mon Mar 14 12:21:03 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 12:21:03 +0100 Subject: VCL BNF In-Reply-To: <69458.1300098188@critter.freebsd.dk> References: <69458.1300098188@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 11:23 AM, Poul-Henning Kamp wrote: > In message , > Kacper Wysocki writes: > >>The BNF might not be 100% complete yet - there might be bugs - so wiki >>is appropriate. kwy is my trac login. > > Agreed. > > You should have wiki bit now. http://www.varnish-cache.org/trac/wiki/VCL.BNF I put a link under Documentation. 0K From schmidt at ze.tum.de Mon Mar 14 13:00:23 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 13:00:23 +0100 Subject: SSL In-Reply-To: References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> Message-ID: <4D7E0357.4070204@ze.tum.de> Am 14.03.2011 12:05, schrieb Kacper Wysocki: > On Mon, Mar 14, 2011 at 9:34 AM, Gerhard Schmidt wrote: >> Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: >>> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >>> >>>> stunnel has the disatwantage that we loose the clientIP information. >>> >>> Doesn't it set a header with this information ? >> >> It's a tunnel. It doesn't change the stream. As I said, we use pound because >> it sets the header. But its another daemon to run and to setup. Another >> component that could fail. Integrating SSL in varnish would reduce the >> complexity. > > What you meant to say is "integrating SSL in Varnish would increase > complexity". > Putting that component inside varnish doesn't automatically make it > infallable. As an added bonus, if SSL is in a separate process it > won't bring the whole server down if it fails, if that's the kind of > stuff you're worried about. It does kill your service if your service is SSL-based. Managing more config and more daemons always increases the complexity. More daemons increase the probability of failure and increase the monitoring requirements. More daemons increase the probability of security problems. More daemons increase the amount of time spent keeping the system up to date. It might increase the complexity of varnish, but not of the system as a whole. 
Regards Estartu -- ------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de TU-München | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: 089/289-25270 | Fax: 089/289-25257 | PGP-Publickey auf Anfrage -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From perbu at varnish-software.com Mon Mar 14 13:10:41 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 13:10:41 +0100 Subject: SSL In-Reply-To: <4D7E0357.4070204@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> <4D7E0357.4070204@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 1:00 PM, Gerhard Schmidt wrote: > > It does kill your serive if your service is SSL based. > > Managing more config and more daemons always increses the complexity. > More Daemons increse the probabilty of failure and increase the monitioring > requirements. > More Daemons increase the probailty of security problems. > More Daemons increase the amount of time spend keepings the system up to > date. > First of all: Varnish is probably never getting SSL support built in, so you can stop beating that horse. Also, in my opinion, it's easier to have two simple systems than one complex system. Having small dedicated programs is the beautiful design principle of Unix, and as long as it won't influence performance I'm sold. IMO this is mostly a packaging issue. If we repackage stunnel as "varnish-ssl" and make it "just work" it will be dead simple. It does, however, put the pressure on us to maintain it, but that is minor. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Mon Mar 14 13:17:59 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 12:17:59 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 13:00:23 +0100." <4D7E0357.4070204@ze.tum.de> Message-ID: <54017.1300105079@critter.freebsd.dk> In message <4D7E0357.4070204 at ze.tum.de>, Gerhard Schmidt writes: >Managing more config and more daemons always increses the complexity. >More Daemons increse the probabilty of failure and increase the monitioring >requirements. >More Daemons increase the probailty of security problems. >More Daemons increase the amount of time spend keepings the system up to date. > >It might increase the complexity of varnish but not the system a hole. I can absolutely guarantee you that there would be no relevant difference in complexity, because the only way we can realistically add SSL to varnish is to start another daemon process to do it. Adding that complexity to Varnish will decrease the overall security relative to having the SSL daemon be a self-contained piece of software, simply as a matter of code complexity. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
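In the same spirit as the X-Forwarded-For sketch above, a setup that terminates SSL in front of Varnish often also wants the original scheme preserved, since Varnish itself only ever sees plain HTTP. X-Forwarded-Proto is not discussed in this thread; treating it as an agreed convention between the terminator and Varnish is an assumption, sketched here:

acl terminators {
    "127.0.0.1";
}

sub vcl_recv {
    # Anything that did not arrive via the local SSL terminator is plain
    # HTTP; make sure a remote client cannot pretend otherwise.
    if (!(client.ip ~ terminators)) {
        set req.http.X-Forwarded-Proto = "http";
    }
}

The backend can then generate scheme-correct links and redirects; if cached responses genuinely differ between HTTP and HTTPS, the header also has to be reflected in the cache key or in Vary.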
From perbu at varnish-software.com Mon Mar 14 14:06:15 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 14:06:15 +0100 Subject: SSL In-Reply-To: <4D7E10F9.1040904@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> <4D7E0357.4070204@ze.tum.de> <4D7E10F9.1040904@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 1:58 PM, Gerhard Schmidt wrote: > > > Also, in my opinion, it's easier to have two simple systems than one > complex > > system. Having small dedicated programs is the beautiful design principle > of > > Unix and as long as it won't influence performance I'm sold. > > If there was a way to use simple dedicated service without loosing > information > this would be correct. But there isn't a simple daemon to accept ssl > connections for varnish without loosing the Client Information. > You didn't read the whole thread, did you? You obviously don't know about the PROXY protocol mode of the patched stunnel version we're talking about. It requires slight modifications of Varnish and would transmit client.ip initially when talking with Varnish. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.chiswell at mangahigh.com Mon Mar 14 18:02:16 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 14 Mar 2011 17:02:16 +0000 Subject: VCL Formatting Message-ID: <4D7E4A18.3030701@mangahigh.com> Hi, Does any know of, or have written, a code formatter for Varnish's VCL files which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? Ideally, with full syntax coloring and type hinting - but just something that can understand that VCL format and make sure the indents are "good" will do! Many thanks, Richard Chiswell http://twitter.com/rchiswell From perbu at varnish-software.com Mon Mar 14 18:33:14 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 18:33:14 +0100 Subject: VCL Formatting In-Reply-To: <4D7E4A18.3030701@mangahigh.com> References: <4D7E4A18.3030701@mangahigh.com> Message-ID: Hi, On Mon, Mar 14, 2011 at 6:02 PM, Richard Chiswell < richard.chiswell at mangahigh.com> wrote: > Hi, > > Does any know of, or have written, a code formatter for Varnish's VCL files > which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? > I use c-mode in Emacs - works ok for my somewhat limited needs. There probably is some codematting stuff for C you can use. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From checker at d6.com Mon Mar 14 23:30:14 2011 From: checker at d6.com (Chris Hecker) Date: Mon, 14 Mar 2011 15:30:14 -0700 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> Message-ID: <4D7E96F6.4060707@d6.com> Anybody have any ideas? They're not all the same mime type, so I think putting them in an uncached dir is better if there's no way to figure it out in vcl. Chris On 2011/03/13 07:26, AD wrote: > i dont think you can check the body size (at least it seems that way > with the existing req.* objects ). If you know the mime-type of the > file you might just be able to pipe the mime type if that works for all > file sizes ? > > I wonder if there is a way to pass the req object into some inline C > that can access the body somehow? > > On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker > wrote: > > > I have a 400mb file that I just want apache to serve. What's the > best way to do this? I can put it in a directory and tell varnish > not to cache stuff that matches that dir, but I'd rather just make a > general rule that varnish should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From moseleymark at gmail.com Mon Mar 14 23:51:19 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 14 Mar 2011 15:51:19 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7E96F6.4060707@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7E96F6.4060707@d6.com> Message-ID: On Mon, Mar 14, 2011 at 3:30 PM, Chris Hecker wrote: > > Anybody have any ideas? ?They're not all the same mime type, so I think > putting them in an uncached dir is better if there's no way to figure it out > in vcl. > > Chris > > > > On 2011/03/13 07:26, AD wrote: >> >> i dont think you can check the body size (at least it seems that way >> with the existing req.* objects ). ?If you know the mime-type of the >> file you might just be able to pipe the mime type if that works for all >> file sizes ? >> >> I wonder if there is a way to pass the req object into some inline C >> that can access the body somehow? >> >> On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker > > wrote: >> >> >> ? ?I have a 400mb file that I just want apache to serve. ?What's the >> ? ?best way to do this? ?I can put it in a directory and tell varnish >> ? ?not to cache stuff that matches that dir, but I'd rather just make a >> ? ?general rule that varnish should ignore >=20mb files or whatever. >> >> ? ?Thanks, >> ? ?Chris I was asking about the same thing in this thread: http://comments.gmane.org/gmane.comp.web.varnish.misc/4741 Check out Tollef's suggestion towards the end. That's what I've been using. The one drawback is that it's still fetched by varnish *completely* in the first, not-yet-restarted request, which means that a) you're fetching it twice; and b) it'll still stored albeit momentarily, so it'll evict stuff if there's not enough room. Before that, I wasn't sending any reqs for anything matching stuff like .avi or .wmv to varnish (from an nginx frontend). It'd be kind of neat if you could do a call-out and for anything matching a likely large file (i.e. has extension matching .avi, .wmv, etc), and do a HEAD request to determine the response size (or whatever you wanted to look for) before doing the GET. 
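The restart-based approach Mark describes (Tollef's suggestion in the linked thread) looks roughly like the sketch below. It assumes Varnish 2.1 syntax, and the size threshold, marker header name and the choice of pass are illustrative rather than anything prescribed in that thread. As Mark notes, the object is still fetched from the backend in full on the first, pre-restart fetch:

sub vcl_recv {
    if (req.restarts > 0 && req.http.X-Varnish-Nocache) {
        # Second time around: hand the request to the backend without caching.
        return (pass);
    }
}

sub vcl_fetch {
    # A Content-Length of 8 or more digits is 10 MB or more; adjust to taste.
    # Only restart on the first fetch, otherwise the passed fetch above would
    # restart again and loop until max_restarts.
    if (req.restarts == 0 && beresp.http.Content-Length ~ "^[0-9]{8,}$") {
        set req.http.X-Varnish-Nocache = "1";
        return (restart);
    }
}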
From straightflush at gmail.com Tue Mar 15 02:48:08 2011 From: straightflush at gmail.com (AD) Date: Mon, 14 Mar 2011 21:48:08 -0400 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7E96F6.4060707@d6.com> Message-ID: whats interesting is the last comment All this happens over localhost, so it's quite fast, but in the | interest of efficiency, is there something I can set or call so that | it closes that first connection almost immediately? Having to refetch | a 800meg file off of NFS might hurt -- even if a good chunk of it is | still in the OS block cache. You'd need to do this using inline C, but yes, anything is possible. (Sorry, I don't have an example for it here) What do you need to do via inline C to prevent the full 800 MB from being downloaded even the first time? On Mon, Mar 14, 2011 at 6:51 PM, Mark Moseley wrote: > On Mon, Mar 14, 2011 at 3:30 PM, Chris Hecker wrote: > > > > Anybody have any ideas? They're not all the same mime type, so I think > > putting them in an uncached dir is better if there's no way to figure it > out > > in vcl. > > > > Chris > > > > > > > > On 2011/03/13 07:26, AD wrote: > >> > >> i dont think you can check the body size (at least it seems that way > >> with the existing req.* objects ). If you know the mime-type of the > >> file you might just be able to pipe the mime type if that works for all > >> file sizes ? > >> > >> I wonder if there is a way to pass the req object into some inline C > >> that can access the body somehow? > >> > >> On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker >> > wrote: > >> > >> > >> I have a 400mb file that I just want apache to serve. What's the > >> best way to do this? I can put it in a directory and tell varnish > >> not to cache stuff that matches that dir, but I'd rather just make a > >> general rule that varnish should ignore >=20mb files or whatever. > >> > >> Thanks, > >> Chris > > > I was asking about the same thing in this thread: > > http://comments.gmane.org/gmane.comp.web.varnish.misc/4741 > > Check out Tollef's suggestion towards the end. That's what I've been > using. The one drawback is that it's still fetched by varnish > *completely* in the first, not-yet-restarted request, which means that > a) you're fetching it twice; and b) it'll still stored albeit > momentarily, so it'll evict stuff if there's not enough room. > > Before that, I wasn't sending any reqs for anything matching stuff > like .avi or .wmv to varnish (from an nginx frontend). > > It'd be kind of neat if you could do a call-out and for anything > matching a likely large file (i.e. has extension matching .avi, .wmv, > etc), and do a HEAD request to determine the response size (or > whatever you wanted to look for) before doing the GET. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at d6.com Tue Mar 15 08:42:46 2011 From: checker at d6.com (Chris Hecker) Date: Tue, 15 Mar 2011 00:42:46 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F17D5.2090002@bizztravel.nl> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> Message-ID: <4D7F1876.7080809@d6.com> Yeah, I think if I can't do it Right (which I define as checking the file size in the vcl), then I'm just going to make blah.com/uncached/* be uncached. 
I don't want to transfer it once just to throw it away. Chris On 2011/03/15 00:40, Martin Boer wrote: > I've been reading this discussion and imho the most elegant way to do it > is to have a upload directory X and 2 download directories Y and Z with > a script in between that decides whether it's cacheable and move the > file to Y or uncacheable and put it in Z. > All the other solutions mentioned in between are far more intelligent > and much more likely to backfire in some way or another. > > Just my 2 cents. > Martin > > > On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >> I have a 400mb file that I just want apache to serve. What's the best >> way to do this? I can put it in a directory and tell varnish not to >> cache stuff that matches that dir, but I'd rather just make a general >> rule that varnish should ignore >=20mb files or whatever. >> >> Thanks, >> Chris >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > > From martin.boer at bizztravel.nl Tue Mar 15 08:40:05 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Tue, 15 Mar 2011 08:40:05 +0100 Subject: best way to not cache large files? In-Reply-To: <4D7C47D0.9050809@d6.com> References: <4D7C47D0.9050809@d6.com> Message-ID: <4D7F17D5.2090002@bizztravel.nl> I've been reading this discussion and imho the most elegant way to do it is to have a upload directory X and 2 download directories Y and Z with a script in between that decides whether it's cacheable and move the file to Y or uncacheable and put it in Z. All the other solutions mentioned in between are far more intelligent and much more likely to backfire in some way or another. Just my 2 cents. Martin On 03/13/2011 05:28 AM, Chris Hecker wrote: > > I have a 400mb file that I just want apache to serve. What's the best > way to do this? I can put it in a directory and tell varnish not to > cache stuff that matches that dir, but I'd rather just make a general > rule that varnish should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From perbu at varnish-software.com Tue Mar 15 10:46:16 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 15 Mar 2011 10:46:16 +0100 Subject: Online training Message-ID: Hi List. I promise I won't do this to often but I wanted to make you aware that we (Varnish Software) will now be offering online training. We have free seats in the upcoming session on the 24th and 25th of March (targeted mainly towards European time zones). We'll have sessions for US timezones in April. We're also planning a session for NZ and Aussies, but no date is set for this session yet. If your interested please drop me a mail. All our training is conducted by varnish cache committers. Regards, Per. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roberto.fernandezcrisial at gmail.com Tue Mar 15 14:50:30 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 15 Mar 2011 10:50:30 -0300 Subject: Online training In-Reply-To: References: Message-ID: Hi guys, I need some help and I think you can help me. A few days ago I was realized that Varnish is showing some error messages when debug mode is enable on varnishlog: 4741 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2959 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2591 Debug c "Write error, retval = -1, len = 168289, errno = Broken pipe" 3517 Debug c "Write error, retval = -1, len = 114421, errno = Broken pipe" I want to know what are those error messages and why are they generated. Any suggestion? Thank you! Roberto O. Fern?ndez Crisial -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Tue Mar 15 14:51:13 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 15 Mar 2011 10:51:13 -0300 Subject: VarnishLog: Broken pipe (Debug) Message-ID: Hi guys, I need some help and I think you can help me. A few days ago I was realized that Varnish is showing some error messages when debug mode is enable on varnishlog: 4741 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2959 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2591 Debug c "Write error, retval = -1, len = 168289, errno = Broken pipe" 3517 Debug c "Write error, retval = -1, len = 114421, errno = Broken pipe" I want to know what are those error messages and why are they generated. Any suggestion? Thank you! Roberto O. Fern?ndez Crisial -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Tue Mar 15 17:44:26 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Tue, 15 Mar 2011 09:44:26 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F1876.7080809@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: On Tue, Mar 15, 2011 at 12:42 AM, Chris Hecker wrote: > > Yeah, I think if I can't do it Right (which I define as checking the file > size in the vcl), then I'm just going to make blah.com/uncached/* be > uncached. ?I don't want to transfer it once just to throw it away. > > Chris > > > On 2011/03/15 00:40, Martin Boer wrote: >> >> I've been reading this discussion and imho the most elegant way to do it >> is to have a upload directory X and 2 download directories Y and Z with >> a script in between that decides whether it's cacheable and move the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >>> >>> I have a 400mb file that I just want apache to serve. What's the best >>> way to do this? I can put it in a directory and tell varnish not to >>> cache stuff that matches that dir, but I'd rather just make a general >>> rule that varnish should ignore >=20mb files or whatever. >>> >>> Thanks, >>> Chris Yeah, if you have control over directory names, that's by far the better way to go. 
I've got shared hosting customers behind mine, so I've got practically no control over where they put stuff under their webroot. From kbrownfield at google.com Tue Mar 15 21:16:11 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 13:16:11 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F1876.7080809@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: I'm assuming that in this case it's not possible for you to have the backend server emit an appropriate Cache-Control or Expires header based on the size of the file? The server itself will know the file size before transmission, and the reindeer caching games would not be necessary. ;-) That's definitely the Right Way, but it would require control over the backend, which is often not possible. Apache unfortunately doesn't have a built-in mechanism/module to emit a header based on file size, at least that I can find. :-( -- kb On Tue, Mar 15, 2011 at 00:42, Chris Hecker wrote: > > Yeah, I think if I can't do it Right (which I define as checking the file > size in the vcl), then I'm just going to make blah.com/uncached/* be > uncached. I don't want to transfer it once just to throw it away. > > Chris > > > > On 2011/03/15 00:40, Martin Boer wrote: > >> I've been reading this discussion and imho the most elegant way to do it >> is to have a upload directory X and 2 download directories Y and Z with >> a script in between that decides whether it's cacheable and move the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >>> >>> I have a 400mb file that I just want apache to serve. What's the best >>> way to do this? I can put it in a directory and tell varnish not to >>> cache stuff that matches that dir, but I'd rather just make a general >>> rule that varnish should ignore >=20mb files or whatever. >>> >>> Thanks, >>> Chris >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >>> >>> >> >> > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From do.not.eat.yellow.snow at gmail.com Tue Mar 15 21:30:02 2011 From: do.not.eat.yellow.snow at gmail.com (Martin Strand) Date: Tue, 15 Mar 2011 21:30:02 +0100 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: On Tue, 15 Mar 2011 21:16:11 +0100, Ken Brownfield wrote: > > Apache unfortunately doesn't have a built-in mechanism/module to emit a > header based on file size What about the "Content-Length" header? Apache seems to emit that automatically. From kbrownfield at google.com Tue Mar 15 22:59:49 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 14:59:49 -0700 Subject: best way to not cache large files? 
In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: I think mod_headers/SetEnvIf/etc is applied at request time, before processing occurs (the parameters they have available to them are quite limited). But there may be a way to do later in the chain, and certainly with a custom mod. -- kb On Tue, Mar 15, 2011 at 13:30, Martin Strand < do.not.eat.yellow.snow at gmail.com> wrote: > On Tue, 15 Mar 2011 21:16:11 +0100, Ken Brownfield > wrote: > >> >> Apache unfortunately doesn't have a built-in mechanism/module to emit a >> header based on file size >> > > What about the "Content-Length" header? Apache seems to emit that > automatically. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at d6.com Wed Mar 16 00:56:37 2011 From: checker at d6.com (Chris Hecker) Date: Tue, 15 Mar 2011 16:56:37 -0700 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: <4D7FFCB5.6030105@d6.com> I'm not sure I understand. I have control over the back end, the front end, the middle end, all the ends. However, I thought the problem was there was no way to get varnish to read the header without loading the file into the cache? If that's not true, then shouldn't Content-Length be enough? Chris On 2011/03/15 13:16, Ken Brownfield wrote: > I'm assuming that in this case it's not possible for you to have the > backend server emit an appropriate Cache-Control or Expires header based > on the size of the file? The server itself will know the file size > before transmission, and the reindeer caching games would not be > necessary. ;-) > > That's definitely the Right Way, but it would require control over the > backend, which is often not possible. Apache unfortunately doesn't have > a built-in mechanism/module to emit a header based on file size, at > least that I can find. :-( > -- > kb > > > > On Tue, Mar 15, 2011 at 00:42, Chris Hecker > wrote: > > > Yeah, I think if I can't do it Right (which I define as checking the > file size in the vcl), then I'm just going to make > blah.com/uncached/* be uncached. I > don't want to transfer it once just to throw it away. > > Chris > > > > On 2011/03/15 00:40, Martin Boer wrote: > > I've been reading this discussion and imho the most elegant way > to do it > is to have a upload directory X and 2 download directories Y and > Z with > a script in between that decides whether it's cacheable and move the > file to Y or uncacheable and put it in Z. > All the other solutions mentioned in between are far more > intelligent > and much more likely to backfire in some way or another. > > Just my 2 cents. > Martin > > > On 03/13/2011 05:28 AM, Chris Hecker wrote: > > > I have a 400mb file that I just want apache to serve. What's > the best > way to do this? I can put it in a directory and tell varnish > not to > cache stuff that matches that dir, but I'd rather just make > a general > rule that varnish should ignore >=20mb files or whatever. 
> > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From kbrownfield at google.com Wed Mar 16 03:45:46 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 19:45:46 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7FFCB5.6030105@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> <4D7FFCB5.6030105@d6.com> Message-ID: If you have control over the backend (Apache) it should be made to emit a Cache-Control or Expires header to Varnish to make the object non-cacheable *if the file is too large*. Apache will know the file's size before a request occurs. I was talking about logic within Apache, not Varnish. This is how it's "supposed" to happen. With Varnish, I see no way to avoid downloading the entire file every time. You can control whether the file *stays* in cache, but that's it. If there were a URL pattern (e.g., magic subdirectory), you could conceivably switch to pipe in those cases. Thinking out loud... HTTP servers will send a response to a HEAD request with a Content-Length header that represents the length of the full object had a GET been performed. If your Apache does this (some configurations will disable this), one hack would be to have Varnish send a HEAD request to Apache for every object, set a req flag if the returned content length is too large, then restart, and then have logic that will force pipe if it's too large, otherwise pass. This will double the hits to the back-end, however, so some conditionals would help (only .mov, or only a certain subdirectory, etc.) And I've never tried changing a GET to a HEAD with VCL or inline-C. But usually when something is that difficult, it's a square peg and a round hole. :-) FWIW, -- kb On Tue, Mar 15, 2011 at 16:56, Chris Hecker wrote: > > I'm not sure I understand. I have control over the back end, the front > end, the middle end, all the ends. However, I thought the problem was there > was no way to get varnish to read the header without loading the file into > the cache? If that's not true, then shouldn't Content-Length be enough? > > Chris > > On 2011/03/15 13:16, Ken Brownfield wrote: > >> I'm assuming that in this case it's not possible for you to have the >> backend server emit an appropriate Cache-Control or Expires header based >> on the size of the file? The server itself will know the file size >> before transmission, and the reindeer caching games would not be >> necessary. ;-) >> >> That's definitely the Right Way, but it would require control over the >> backend, which is often not possible. Apache unfortunately doesn't have >> a built-in mechanism/module to emit a header based on file size, at >> least that I can find. :-( >> -- >> kb >> >> >> >> On Tue, Mar 15, 2011 at 00:42, Chris Hecker > > wrote: >> >> >> Yeah, I think if I can't do it Right (which I define as checking the >> file size in the vcl), then I'm just going to make >> blah.com/uncached/* be uncached. 
I >> don't want to transfer it once just to throw it away. >> >> Chris >> >> >> >> On 2011/03/15 00:40, Martin Boer wrote: >> >> I've been reading this discussion and imho the most elegant way >> to do it >> is to have a upload directory X and 2 download directories Y and >> Z with >> a script in between that decides whether it's cacheable and move >> the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more >> intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >> >> I have a 400mb file that I just want apache to serve. What's >> the best >> way to do this? I can put it in a directory and tell varnish >> not to >> cache stuff that matches that dir, but I'd rather just make >> a general >> rule that varnish should ignore >=20mb files or whatever. >> >> Thanks, >> Chris >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisbloom7 at gmail.com Wed Mar 16 14:58:39 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Wed, 16 Mar 2011 09:58:39 -0400 Subject: Session issues when using Varnish Message-ID: I have been investigating an issue on a client's website that is very peculiar. I have verified that the behavior is due to the instance of Varnish that Rackspace configured for us. However, I'm not sure if this constitutes a bug in Varnish or a configuration error. I'm hoping someone can verify it for me one way or the other. Here is the scenario: Some of our PHP pages are protected by way of verifying that certain session variables are set. If not, the user is sent to the login page. We have observed that on URLs in which there is a querystring, and when the last value of that querystring ends in ".jpg", ".jpeg", ".gif", or ".png", and when we have an iptable rule that routes requests from port 80 to Varnish, the session is reset completely. Oddly enough, no other extension seems to have this affect. I have recreated this behavior in a clean PHP file, which I've attached. You can test this script on your own using the following URLs. The ones marked with the * are where the session gets reset. 
http://localhost/test_cdb.php http://localhost/test_cdb.php?foo=1 http://localhost/test_cdb.php?foo=1&baz=bix http://localhost/test_cdb.php?foo=1&baz=bix.far http://localhost/test_cdb.php?foo=1&baz=bix.far.jpg * http://localhost/test_cdb.php?foo=1&baz=bix.fur http://localhost/test_cdb.php?foo=1&baz=bix.gif * http://localhost/test_cdb.php?foo=1&baz=bix.bmp http://localhost/test_cdb.php?foo=1&baz=bix.php http://localhost/test_cdb.php?foo=1&baz=bix.exe http://localhost/test_cdb.php?foo=1&baz=bix.tar http://localhost/test_cdb.php?foo=1&baz=bix.jpeg * Here is the rule we created for iptables -A PREROUTING -t nat -d x.x.x.128 -p tcp -m tcp --dport 80 -j DNAT --to-destination x.x.x.128:6081 Chris Bloom Internet Application Developer -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_cdb.php Type: application/x-httpd-php Size: 721 bytes Desc: not available URL: From bjorn at ruberg.no Wed Mar 16 15:15:04 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Wed, 16 Mar 2011 15:15:04 +0100 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D80C5E8.8040503@ruberg.no> On 03/16/2011 02:58 PM, Chris Bloom wrote: > I have been investigating an issue on a client's website that is very > peculiar. I have verified that the behavior is due to the instance of > Varnish that Rackspace configured for us. However, I'm not sure if this > constitutes a bug in Varnish or a configuration error. I'm hoping > someone can verify it for me one way or the other. > > Here is the scenario: Some of our PHP pages are protected by way of > verifying that certain session variables are set. If not, the user is > sent to the login page. We have observed that on URLs in which there is > a querystring, and when the last value of that querystring ends in > ".jpg", ".jpeg", ".gif", or ".png", and when we have an iptable rule > that routes requests from port 80 to Varnish, the session is reset > completely. Oddly enough, no other extension seems to have this affect. This *looks* like some general Varnish rule that removes any (session) cookies when the URL (including the query string) ends with jpg, jpeg etc. However, since you did not include the Varnish configuration or Varnish logs, you will only receive guesswork. Your test file is of absolutely no value as long as you didn't a) provide the real URL for remote diagnosis and/or b) the VCL for local testing. Without any information on the Varnish configuration, further requests for assistance should be directed to your provider. You need someone with access to the VCL to be able to confirm your issue. The symptoms should be sufficiently descriptive, as long as they reach someone who can do anything about it. We can't. Good luck, -- Bj?rn From chrisbloom7 at gmail.com Wed Mar 16 16:55:56 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Wed, 16 Mar 2011 11:55:56 -0400 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: Thank you, Bjorn, for your response. Our hosting provider tells me that the following routines have been added to the default config. 
sub vcl_recv { # Cache things with these extensions if (req.url ~ "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { unset req.http.cookie; return (lookup); } } sub vcl_fetch { # Cache things with these extensions if (req.url ~ "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { unset req.http.set-cookie; set obj.ttl = 1h; } } Clearly the req.url variable contains the entire request URL, including the querystring. Is there another variable that I should be using instead that would only include the script name? If this is the default behavior, I'm inclined to cry "bug". You can test that other script for yourself by substituting maxisavergroup.com for the domain in the example URLs I provided. PS: We are using Varnish 2.0.6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Wed Mar 16 17:48:47 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Wed, 16 Mar 2011 13:48:47 -0300 Subject: Limited urls Message-ID: Hi guys, I am trying to restrict some access to my Varnish. I want to accept only requests for domain1.com and domain2.com, but deny access to server's IP address. This is my vcl_recv: if (req.http.host ~ ".*domain1.*") { set req.backend = domain1; } elseif (req.http.host ~ ".*domain2.*") { set req.backend = domain2; } else { error 405 "Sorry!"; } Am I doing the right way? Do you have any suggestion? Thank you, Roberto. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn at ruberg.no Wed Mar 16 19:03:52 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Wed, 16 Mar 2011 19:03:52 +0100 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D80FB88.5030907@ruberg.no> On 03/16/2011 04:55 PM, Chris Bloom wrote: > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been > added to the default config. > > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } This is a rather standard config, not designed for corner cases like yours. > Clearly the req.url variable contains the entire request URL, including > the querystring. Is there another variable that I should be using > instead that would only include the script name? If this is the default > behavior, I'm inclined to cry "bug". You can start crying bug after you've convinced the rest of the Internet world, including all browsers, that the query string should not be considered part of the URL. In the meantime, I suggest you let your provider know that your application has special requirements that they will need to accommodate. Your provider can't offer proper service when they don't know your requirements. To provide you with a useful Varnish configuration, your provider needs to know quite a few things about how your application works. This includes any knowledge of cookies and when Varnish should and should not allow them. Since you ask the Varnish community instead of discussing this with your provider, I guess these requirements were never communicated. 
A few tips you and your provider can consider: a) Perhaps a second cookie could be set by the backend application for logged-in users. A configuration could be made so that Varnish would choose to not remove cookies from the file suffixes listed if this cookie was present. b) If the path(s)/filename(s) where the query string may include the mentioned file suffixes are identifiable, your provider could create an exception for those. E.g. if ?foo=bar.jpg only occurs with /some/test/file.php, then the if clause in vcl_recv could take that into consideration. c) Regular expressions in 2.0.6 are case insensitive, so listing both "jpg" and "JPG" in the same expression is unnecessary. - Bj?rn From davidpetzel at gmail.com Wed Mar 16 19:21:21 2011 From: davidpetzel at gmail.com (David Petzel) Date: Wed, 16 Mar 2011 14:21:21 -0400 Subject: Question on Re-Using Backend Probes Message-ID: I'm really new to varnish, so please forgive me if this answered elsewhere, I did some searching and couldn't seem to find it however. I was reviewing the documention and I have a question about back end probes. I'm setting up a directory that will have 10-12 backends. I want each backend to use the same health check, but I don't want to have to re-define the prove 10-12 times. Is it possible to define the probe externally to the backend configuration, and then reference it. Something like the following? probe myProbe1 { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } backend server1 { .host = "server1.example.com"; .probe = myProbe1 } backend server2 { .host = "server2.example.com"; .probe = myProbe1 } All of the examples I've come across have the probe redefined again. for example on http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html#health-checks They show the following example which feels redundant. backend server1 { .host = "server1.example.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } } backend server2 { .host = "server2.example.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 16 19:29:10 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 16 Mar 2011 19:29:10 +0100 Subject: Question on Re-Using Backend Probes In-Reply-To: References: Message-ID: Hi David. On Wed, Mar 16, 2011 at 7:21 PM, David Petzel wrote: > I'm really new to varnish, so please forgive me if this answered elsewhere, > I did some searching and couldn't seem to find it however. > I was reviewing the documention and I have a question about back end probes. > I'm setting up a directory that will have 10-12 backends. I want each > backend to use the same health check, but I don't want to have to re-define > the prove 10-12 times. Is it possible to define the probe externally to the > backend configuration, and then reference it. No. That is not possible. However, you could use a makro language of sorts to pre-process the configuration. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
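Worth noting for the archives: the probe-sharing David asks about did land in VCL later on. The 3.0 branch (still in testing at the time of this thread) lets a probe be declared once by name and referenced from each backend, so the pre-processing trick is only needed on 2.1.x. A minimal, untested sketch assuming 3.0-style syntax and reusing David's example names:

probe myProbe1 {
    .url = "/";
    .interval = 5s;
    .timeout = 1s;
    .window = 5;
    .threshold = 3;
}

backend server1 {
    .host = "server1.example.com";
    # Named reference instead of an inline probe block; not valid in 2.1.x.
    .probe = myProbe1;
}

backend server2 {
    .host = "server2.example.com";
    .probe = myProbe1;
}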
http://www.varnish-software.com/whitepapers From dhelkowski at sbgnet.com Wed Mar 16 19:51:35 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Wed, 16 Mar 2011 14:51:35 -0400 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D8106B7.5030604@sbgnet.com> The vcl you are showing may be standard, but as you have noticed it will not work properly when query strings end in a file extension. I encountered this same problem after blindly copying from example varnish configurations. Before the check is done, the query parameter needs to be stripped from the url. Example of an alternate way to check the extensions: sub vcl_recv { ... set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { return(lookup); } ... } Doubtless others will say this approach is wrong for some reason or another. I use it in a production environment and it works fine though. Pass it along to your hosting provider and request that they consider changing their config. Note that the above code will cause the end user to receive a 'ext' header with the file extension. You can add a 'remove req.http.ext' after the code if you don't want that to happen... Another thing to consider is that whether it this is a bug or not; it is a common problem with varnish configurations, and as such can be used on most varnish servers to force them to return things differently then they normally would. IE: if some backend script is a huge request and eats up resources, sending it a '?.jpg' could be used to hit it repeatedly and bring about a denial of service. On 3/16/2011 11:55 AM, Chris Bloom wrote: > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been > added to the default config. > > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } > > Clearly the req.url variable contains the entire request URL, > including the querystring. Is there another variable that I should be > using instead that would only include the script name? If this is the > default behavior, I'm inclined to cry "bug". > > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the > example URLs I provided. > > PS: We are using Varnish 2.0.6 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 16 19:59:02 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 16 Mar 2011 19:59:02 +0100 Subject: Session issues when using Varnish In-Reply-To: <4D8106B7.5030604@sbgnet.com> References: <4D8106B7.5030604@sbgnet.com> Message-ID: Hi David, List. I think I'll use this snipplet in the documentation if you don't mind. I need to work in more regsub calls there anyway. Cheers, Per. 
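A small aside that may help when reworking those examples (untested sketch, X-Path is just an illustrative header name): regsub() replaces only the first match, while regsuball() replaces every match, so stripping a query string wants regsub anchored at the first "?".

# Drop everything from the first "?" onward before testing the extension.
set req.http.X-Path = regsub(req.url, "\?.*$", "");
# regsuball() is the tool for repeated patterns, e.g. collapsing "//" runs.
set req.http.X-Path = regsuball(req.http.X-Path, "/+", "/");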
On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski wrote: > The vcl you are showing may be standard, but as you have noticed it will not > work properly when > query strings end in a file extension. I encountered this same problem after > blindly copying from > example varnish configurations. > Before the check is done, the query parameter needs to be stripped from the > url. > Example of an alternate way to check the extensions: > > sub vcl_recv { > ??? ... > ??? set req.http.ext = regsub( req.url, "\?.+$", "" ); > ??? set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > ??? if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > ????? return(lookup); > ??? } > ??? ... > } > > Doubtless others will say this approach is wrong for some reason or another. > I use it in a production > environment and it works fine though. Pass it along to your hosting provider > and request that they > consider changing their config. > > Note that the above code will cause the end user to receive a 'ext' header > with the file extension. > You can add a 'remove req.http.ext' after the code if you don't want that to > happen... > > Another thing to consider is that whether it this is a bug or not; it is a > common problem with varnish > configurations, and as such can be used on most varnish servers to force > them to return things > differently then they normally would. IE: if some backend script is a huge > request and eats up resources, sending > it a '?.jpg' could be used to hit it repeatedly and bring about a denial of > service. > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > Thank you, Bjorn, for your response. > Our hosting provider tells me that the following routines have been added to > the default config. > sub vcl_recv { > ??# Cache things with these extensions > ??if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > ?? ?unset req.http.cookie; > ?? ?return (lookup); > ??} > } > sub vcl_fetch { > ??# Cache things with these extensions > ??if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > ?? ?unset req.http.set-cookie; > ?? ?set obj.ttl = 1h; > ??} > } > Clearly the req.url variable contains the entire request URL, including the > querystring. Is there another variable that I should be using instead that > would only include the script name? If this is the default behavior, I'm > inclined to cry "bug". > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the example URLs I provided. > PS: We are using Varnish 2.0.6 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
http://www.varnish-software.com/whitepapers From straightflush at gmail.com Wed Mar 16 20:30:31 2011 From: straightflush at gmail.com (AD) Date: Wed, 16 Mar 2011 15:30:31 -0400 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: You can remove the header so it doesnt get set set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { * remove req.http.ext; * return(lookup); } On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: > Hi David, List. > > I think I'll use this snipplet in the documentation if you don't mind. > I need to work in more regsub calls there anyway. > > Cheers, > > Per. > > On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski > wrote: > > The vcl you are showing may be standard, but as you have noticed it will > not > > work properly when > > query strings end in a file extension. I encountered this same problem > after > > blindly copying from > > example varnish configurations. > > Before the check is done, the query parameter needs to be stripped from > the > > url. > > Example of an alternate way to check the extensions: > > > > sub vcl_recv { > > ... > > set req.http.ext = regsub( req.url, "\?.+$", "" ); > > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > > if( req.http.ext ~ > > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > > return(lookup); > > } > > ... > > } > > > > Doubtless others will say this approach is wrong for some reason or > another. > > I use it in a production > > environment and it works fine though. Pass it along to your hosting > provider > > and request that they > > consider changing their config. > > > > Note that the above code will cause the end user to receive a 'ext' > header > > with the file extension. > > You can add a 'remove req.http.ext' after the code if you don't want that > to > > happen... > > > > Another thing to consider is that whether it this is a bug or not; it is > a > > common problem with varnish > > configurations, and as such can be used on most varnish servers to force > > them to return things > > differently then they normally would. IE: if some backend script is a > huge > > request and eats up resources, sending > > it a '?.jpg' could be used to hit it repeatedly and bring about a denial > of > > service. > > > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > > > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been added > to > > the default config. > > sub vcl_recv { > > # Cache things with these extensions > > if (req.url ~ > > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > > unset req.http.cookie; > > return (lookup); > > } > > } > > sub vcl_fetch { > > # Cache things with these extensions > > if (req.url ~ > > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > > unset req.http.set-cookie; > > set obj.ttl = 1h; > > } > > } > > Clearly the req.url variable contains the entire request URL, including > the > > querystring. Is there another variable that I should be using instead > that > > would only include the script name? If this is the default behavior, I'm > > inclined to cry "bug". > > You can test that other script for yourself by substituting > > maxisavergroup.com for the domain in the example URLs I provided. 
> > PS: We are using Varnish 2.0.6 > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Wed Mar 16 23:58:31 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Wed, 16 Mar 2011 15:58:31 -0700 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: Or not set a header at all: if ( req.url ~ "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { unset req.http.cookie return(lookup); } Didn't test the regex with Varnish's regex handling. -- kb On Wed, Mar 16, 2011 at 12:30, AD wrote: > You can remove the header so it doesnt get set > > set req.http.ext = regsub( req.url, "\?.+$", "" ); > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > * remove req.http.ext; * > return(lookup); > } > > > > On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: > >> Hi David, List. >> >> I think I'll use this snipplet in the documentation if you don't mind. >> I need to work in more regsub calls there anyway. >> >> Cheers, >> >> Per. >> >> On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski >> wrote: >> > The vcl you are showing may be standard, but as you have noticed it will >> not >> > work properly when >> > query strings end in a file extension. I encountered this same problem >> after >> > blindly copying from >> > example varnish configurations. >> > Before the check is done, the query parameter needs to be stripped from >> the >> > url. >> > Example of an alternate way to check the extensions: >> > >> > sub vcl_recv { >> > ... >> > set req.http.ext = regsub( req.url, "\?.+$", "" ); >> > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); >> > if( req.http.ext ~ >> > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >> > return(lookup); >> > } >> > ... >> > } >> > >> > Doubtless others will say this approach is wrong for some reason or >> another. >> > I use it in a production >> > environment and it works fine though. Pass it along to your hosting >> provider >> > and request that they >> > consider changing their config. >> > >> > Note that the above code will cause the end user to receive a 'ext' >> header >> > with the file extension. >> > You can add a 'remove req.http.ext' after the code if you don't want >> that to >> > happen... >> > >> > Another thing to consider is that whether it this is a bug or not; it is >> a >> > common problem with varnish >> > configurations, and as such can be used on most varnish servers to force >> > them to return things >> > differently then they normally would. 
IE: if some backend script is a >> huge >> > request and eats up resources, sending >> > it a '?.jpg' could be used to hit it repeatedly and bring about a denial >> of >> > service. >> > >> > On 3/16/2011 11:55 AM, Chris Bloom wrote: >> > >> > Thank you, Bjorn, for your response. >> > Our hosting provider tells me that the following routines have been >> added to >> > the default config. >> > sub vcl_recv { >> > # Cache things with these extensions >> > if (req.url ~ >> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >> > unset req.http.cookie; >> > return (lookup); >> > } >> > } >> > sub vcl_fetch { >> > # Cache things with these extensions >> > if (req.url ~ >> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >> > unset req.http.set-cookie; >> > set obj.ttl = 1h; >> > } >> > } >> > Clearly the req.url variable contains the entire request URL, including >> the >> > querystring. Is there another variable that I should be using instead >> that >> > would only include the script name? If this is the default behavior, I'm >> > inclined to cry "bug". >> > You can test that other script for yourself by substituting >> > maxisavergroup.com for the domain in the example URLs I provided. >> > PS: We are using Varnish 2.0.6 >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> >> >> >> -- >> Per Buer, Varnish Software >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer >> Varnish makes websites fly! >> Want to learn more about Varnish? >> http://www.varnish-software.com/whitepapers >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoiz.shine at gmail.com Thu Mar 17 03:34:55 2011 From: amoiz.shine at gmail.com (Sharl.Jimh.Tsin) Date: Thu, 17 Mar 2011 10:34:55 +0800 Subject: Limited urls In-Reply-To: References: Message-ID: yes,it is right. Best regards, Sharl.Jimh.Tsin (From China **Obviously Taiwan INCLUDED**) 2011/3/17 Roberto O. Fern?ndez Crisial : > Hi guys, > I am trying to?restrict?some access to my Varnish. I?want?to accept only > requests for?domain1.com and?domain2.com, but deny access to server's IP > address. This is my vcl_recv: > if (req.http.host ~ ".*domain1.*") > { > > set req.backend = domain1; > > } > elseif (req.http.host ~ ".*domain2.*") > { > > set req.backend = domain2; > > } > else > { > > error 405 "Sorry!"; > > } > Am I doing the right way? Do you have any suggestion? > Thank you, > Roberto. 
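One caveat on the host matching: ".*domain1.*" is an unanchored substring match, so a Host header like "domain1.com.attacker.example" would also be routed to the domain1 backend and never reach the 405 catch-all. An untested sketch that anchors the expressions and tolerates an optional port:

if (req.http.host ~ "(^|\.)domain1\.com(:[0-9]+)?$") {
    set req.backend = domain1;
} elseif (req.http.host ~ "(^|\.)domain2\.com(:[0-9]+)?$") {
    set req.backend = domain2;
} else {
    # Requests for the bare IP address (or any other Host header) end up here.
    error 405 "Sorry!";
}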
> _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From dhelkowski at sbgnet.com Thu Mar 17 03:40:16 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Wed, 16 Mar 2011 22:40:16 -0400 (EDT) Subject: Session issues when using Varnish In-Reply-To: <762709593.897379.1300329124628.JavaMail.root@mail-01.sbgnet.com> Message-ID: <1185929555.897407.1300329616885.JavaMail.root@mail-01.sbgnet.com> I agree that this is a better expression to use if you are only testing one set of extensions and don't intend to do anything else with the extension itself. Using the same method: ( if you want to capture the extension for some reason ) set req.http.ext = regsub( req.url, "^[^\?]*?\.([a-zA-Z]+)($|\?)", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { return(lookup); } I also have not tested this; but it should work assuming the other example works. ----- Original Message ----- From: "Ken Brownfield" To: varnish-misc at varnish-cache.org Sent: Wednesday, March 16, 2011 6:58:31 PM Subject: Re: Session issues when using Varnish Or not set a header at all: if ( req.url ~ "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { unset req.http.cookie return(lookup); } Didn't test the regex with Varnish's regex handling. -- kb On Wed, Mar 16, 2011 at 12:30, AD < straightflush at gmail.com > wrote: You can remove the header so it doesnt get set set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { remove req.http.ext; return(lookup); } On Wed, Mar 16, 2011 at 2:59 PM, Per Buer < perbu at varnish-software.com > wrote: Hi David, List. I think I'll use this snipplet in the documentation if you don't mind. I need to work in more regsub calls there anyway. Cheers, Per. On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski < dhelkowski at sbgnet.com > wrote: > The vcl you are showing may be standard, but as you have noticed it will not > work properly when > query strings end in a file extension. I encountered this same problem after > blindly copying from > example varnish configurations. > Before the check is done, the query parameter needs to be stripped from the > url. > Example of an alternate way to check the extensions: > > sub vcl_recv { > ... > set req.http.ext = regsub( req.url, "\?.+$", "" ); > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > return(lookup); > } > ... > } > > Doubtless others will say this approach is wrong for some reason or another. > I use it in a production > environment and it works fine though. Pass it along to your hosting provider > and request that they > consider changing their config. > > Note that the above code will cause the end user to receive a 'ext' header > with the file extension. > You can add a 'remove req.http.ext' after the code if you don't want that to > happen... > > Another thing to consider is that whether it this is a bug or not; it is a > common problem with varnish > configurations, and as such can be used on most varnish servers to force > them to return things > differently then they normally would. 
IE: if some backend script is a huge > request and eats up resources, sending > it a '?.jpg' could be used to hit it repeatedly and bring about a denial of > service. > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > Thank you, Bjorn, for your response. > Our hosting provider tells me that the following routines have been added to > the default config. > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } > Clearly the req.url variable contains the entire request URL, including the > querystring. Is there another variable that I should be using instead that > would only include the script name? If this is the default behavior, I'm > inclined to cry "bug". > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the example URLs I provided. > PS: We are using Varnish 2.0.6 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From weipeng.pengw at alibaba-inc.com Thu Mar 17 04:01:01 2011 From: weipeng.pengw at alibaba-inc.com (=?GB2312?B?xe3OsA==?=) Date: Thu, 17 Mar 2011 11:01:01 +0800 Subject: ESI problem in Red Hat Enterprise Linux Message-ID: hi all: i install varnish using the source code "varnish-2.1.4.tar.gz" in ubuntu10.4 and "Red Hat Enterprise Linux Server release 5.3 (Tikanga)" when i use ESI in ubuntu, it's ok, both the main page and the esi included page can be showed but the same configure file and the same pages in redhat, only the main page can be showed the configure file is as below: backend default { .host = "127.0.0.1"; .port = "80"; } backend javaeye { .host = "www.javaeye.com"; .port = "80"; .connect_timeout = 1s; .first_byte_timeout = 5s; .between_bytes_timeout = 2s; } acl purge { "localhost"; "127.0.0.1"; "192.168.1.0"/24; } sub vcl_recv { if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/forums/") { set req.backend = javaeye; set req.http.Host="www.javaeye.com"; return (pass); } else { set req.backend = default; } if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (req.request != 
"GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization || req.http.Cookie) { return (pass); } return (lookup); } sub vcl_pass { return (pass); } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } if (!obj.cacheable) { return (pass); } return (deliver); } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache."; } return (fetch); } sub vcl_fetch { if (req.url ~ "/[a-z0-9]+.html$") { esi; /* Do ESI processing */ } remove beresp.http.Last-Modified; remove beresp.http.Etag; #set beresp.http.Cache-Control="no-cache"; if (!beresp.cacheable) { return (pass); } if (beresp.http.Set-Cookie) { return (pass); } if (req.url ~ "^/[a-z]+/") { /* We only deal with GET and HEAD by default */ return (pass); } return (deliver); } sub vcl_deliver { return (deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" "} obj.status " " obj.response {"

</title>
  </head>
  <body>
    <h1>Error "} obj.status " " obj.response {"</h1>
    <p>"} obj.response {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} req.xid {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"}; return (deliver); } the main page url: http://10.20.156.7:8000/haha.html the main page content: 123haha111 please help me! thanks ! Regards! pwlazy ________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you. ???(??????)?????????????????????????????????????????????????????????????????????? -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisbloom7 at gmail.com Thu Mar 17 16:59:59 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Thu, 17 Mar 2011 11:59:59 -0400 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: FYI - I forwarded Ken's suggested solution to our Rackspace tech who updated our config. This appears to have resolved our issue. Thanks! Chris Bloom Internet Application Developer On Wed, Mar 16, 2011 at 6:58 PM, Ken Brownfield wrote: > Or not set a header at all: > > if ( req.url ~ > "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { > unset req.http.cookie > return(lookup); > } > > Didn't test the regex with Varnish's regex handling. > -- > kb > > > > On Wed, Mar 16, 2011 at 12:30, AD wrote: > >> You can remove the header so it doesnt get set >> >> set req.http.ext = regsub( req.url, "\?.+$", "" ); >> set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); >> if( req.http.ext ~ >> "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >> * remove req.http.ext; * >> return(lookup); >> } >> >> >> >> On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: >> >>> Hi David, List. >>> >>> I think I'll use this snipplet in the documentation if you don't mind. >>> I need to work in more regsub calls there anyway. >>> >>> Cheers, >>> >>> Per. >>> >>> On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski >>> wrote: >>> > The vcl you are showing may be standard, but as you have noticed it >>> will not >>> > work properly when >>> > query strings end in a file extension. I encountered this same problem >>> after >>> > blindly copying from >>> > example varnish configurations. >>> > Before the check is done, the query parameter needs to be stripped from >>> the >>> > url. >>> > Example of an alternate way to check the extensions: >>> > >>> > sub vcl_recv { >>> > ... >>> > set req.http.ext = regsub( req.url, "\?.+$", "" ); >>> > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" >>> ); >>> > if( req.http.ext ~ >>> > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >>> > return(lookup); >>> > } >>> > ... >>> > } >>> > >>> > Doubtless others will say this approach is wrong for some reason or >>> another. >>> > I use it in a production >>> > environment and it works fine though. Pass it along to your hosting >>> provider >>> > and request that they >>> > consider changing their config. >>> > >>> > Note that the above code will cause the end user to receive a 'ext' >>> header >>> > with the file extension. >>> > You can add a 'remove req.http.ext' after the code if you don't want >>> that to >>> > happen... >>> > >>> > Another thing to consider is that whether it this is a bug or not; it >>> is a >>> > common problem with varnish >>> > configurations, and as such can be used on most varnish servers to >>> force >>> > them to return things >>> > differently then they normally would. 
IE: if some backend script is a >>> huge >>> > request and eats up resources, sending >>> > it a '?.jpg' could be used to hit it repeatedly and bring about a >>> denial of >>> > service. >>> > >>> > On 3/16/2011 11:55 AM, Chris Bloom wrote: >>> > >>> > Thank you, Bjorn, for your response. >>> > Our hosting provider tells me that the following routines have been >>> added to >>> > the default config. >>> > sub vcl_recv { >>> > # Cache things with these extensions >>> > if (req.url ~ >>> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >>> > unset req.http.cookie; >>> > return (lookup); >>> > } >>> > } >>> > sub vcl_fetch { >>> > # Cache things with these extensions >>> > if (req.url ~ >>> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >>> > unset req.http.set-cookie; >>> > set obj.ttl = 1h; >>> > } >>> > } >>> > Clearly the req.url variable contains the entire request URL, including >>> the >>> > querystring. Is there another variable that I should be using instead >>> that >>> > would only include the script name? If this is the default behavior, >>> I'm >>> > inclined to cry "bug". >>> > You can test that other script for yourself by substituting >>> > maxisavergroup.com for the domain in the example URLs I provided. >>> > PS: We are using Varnish 2.0.6 >>> > >>> > _______________________________________________ >>> > varnish-misc mailing list >>> > varnish-misc at varnish-cache.org >>> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> > >>> > _______________________________________________ >>> > varnish-misc mailing list >>> > varnish-misc at varnish-cache.org >>> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> > >>> >>> >>> >>> -- >>> Per Buer, Varnish Software >>> Phone: <%2B47%2021%2098%2092%2061>+47 21 98 92 61 / Mobile: >>> <%2B47%20958%2039%20117>+47 958 39 117 / Skype: per.buer >>> Varnish makes websites fly! >>> Want to learn more about Varnish? >>> http://www.varnish-software.com/whitepapers >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.begumisa at gmail.com Fri Mar 18 02:24:19 2011 From: j.begumisa at gmail.com (Joseph Begumisa) Date: Thu, 17 Mar 2011 18:24:19 -0700 Subject: Request body of POST Message-ID: Is there anyway I can see the request body of a POST in the varnish logs generated from running the varnishlog command? Thanks. Best Regards, Joseph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tfheen at varnish-software.com Fri Mar 18 09:22:46 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 18 Mar 2011 09:22:46 +0100 Subject: Request body of POST In-Reply-To: (Joseph Begumisa's message of "Thu, 17 Mar 2011 18:24:19 -0700") References: Message-ID: <87y64ckfc9.fsf@qurzaw.varnish-software.com> ]] Joseph Begumisa Hi, | Is there anyway I can see the request body of a POST in the varnish logs | generated from running the varnishlog command? Thanks. No. Use tcpdump or wireshark/tshark to get at that. regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From j.begumisa at gmail.com Fri Mar 18 18:16:54 2011 From: j.begumisa at gmail.com (Joseph Begumisa) Date: Fri, 18 Mar 2011 10:16:54 -0700 Subject: Request body of POST In-Reply-To: <87y64ckfc9.fsf@qurzaw.varnish-software.com> References: <87y64ckfc9.fsf@qurzaw.varnish-software.com> Message-ID: Thanks. Best Regards, Joseph On Fri, Mar 18, 2011 at 1:22 AM, Tollef Fog Heen < tfheen at varnish-software.com> wrote: > ]] Joseph Begumisa > > Hi, > > | Is there anyway I can see the request body of a POST in the varnish logs > | generated from running the varnishlog command? Thanks. > > No. > > Use tcpdump or wireshark/tshark to get at that. > > regards, > -- > Tollef Fog Heen > Varnish Software > t: +47 21 98 92 64 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun Mar 20 22:12:32 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:12:32 +0000 Subject: Are varnish subroutines reentrant? Message-ID: Would I be correct in assuming that any subroutines not using inline C are reentrant? I'm talking about non-defaulted, site-specific subroutines here, not vcl_* ones, as I presume the question is possibly meaningless for the vcl_* set. Many thanks, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From phk at phk.freebsd.dk Sun Mar 20 22:26:58 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 20 Mar 2011 21:26:58 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: Your message of "Sun, 20 Mar 2011 21:12:32 GMT." Message-ID: <98274.1300656418@critter.freebsd.dk> In message , Jona than Matthews writes: >Would I be correct in assuming that any subroutines not using inline C >are reentrant? >I'm talking about non-defaulted, site-specific subroutines here, not >vcl_* ones, as I presume the question is possibly meaningless for the >vcl_* set. It would probably be a lot more easy to answer, if you told me the names of the subroutines you are interested in. In general, reentrancy is highly variable in Varnish. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From contact at jpluscplusm.com Sun Mar 20 22:39:22 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:39:22 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: <98274.1300656418@critter.freebsd.dk> References: <98274.1300656418@critter.freebsd.dk> Message-ID: On 20 March 2011 21:26, Poul-Henning Kamp wrote: > In message , Jona > than Matthews writes: >>Would I be correct in assuming that any subroutines not using inline C >>are reentrant? >>I'm talking about non-defaulted, site-specific subroutines here, not >>vcl_* ones, as I presume the question is possibly meaningless for the >>vcl_* set. 
> > It would probably be a lot more easy to answer, if you told me the > names of the subroutines you are interested in. They're ones that I'm defining in my VCL. They're site-specific helper functions that don't exist in the default VCL. I'm not asking for an analysis of the reentrant nature of a specific algorithm or block of code, just to know if there's anything underlying the VCL at any specific points in the route through the standard subroutines that would make being reentrant more complex to deal with than solely making sure the algorithm is reentrant. If that makes sense :-) Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From phk at phk.freebsd.dk Sun Mar 20 22:43:49 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 20 Mar 2011 21:43:49 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: Your message of "Sun, 20 Mar 2011 21:39:22 GMT." Message-ID: <37820.1300657429@critter.freebsd.dk> In message , Jona than Matthews writes: >just to know if there's anything >underlying the VCL at any specific points in the route through the >standard subroutines that would make being reentrant more complex to >deal with than solely making sure the algorithm is reentrant. If that >makes sense :-) As long as you take care of the usual stuff, (static/global variables etc) there shouldn't be any issues. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From contact at jpluscplusm.com Sun Mar 20 22:58:14 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:58:14 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: <37820.1300657429@critter.freebsd.dk> References: <37820.1300657429@critter.freebsd.dk> Message-ID: On 20 March 2011 21:43, Poul-Henning Kamp wrote: > In message , Jona > than Matthews writes: > >>just to know if there's anything >>underlying the VCL at any specific points in the route through the >>standard subroutines that would make being reentrant more complex to >>deal with than solely making sure the algorithm is reentrant. ?If that >>makes sense :-) > > As long as you take care of the usual stuff, (static/global variables > etc) there shouldn't be any issues. Many thanks. Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From krjeschke at omniti.com Fri Mar 18 20:18:10 2011 From: krjeschke at omniti.com (Katherine Jeschke) Date: Fri, 18 Mar 2011 15:18:10 -0400 Subject: Surge 2011 Conference CFP Message-ID: We are excited to announce Surge 2011, the Scalability and Performance Conference, to be held in Baltimore on Sept 28-30, 2011. The event focuses on case studies that demonstrate successes (and failures) in Web applications and Internet architectures. This year, we're adding Hack Day on September 28th. The inaugural, 2010 conference (http://omniti.com/surge/2010) was a smashing success and we are currently accepting submissions for papers through April 3rd. You can find more information about topics online: http://omniti.com/surge/2011 2010 attendees compared Surge to the early days of Velocity, and our speakers received 3.5-4 out of 4 stars for quality of presentation and quality of content! Nearly 90% of first-year attendees are planning to come again in 2011. For more information about the CFP or sponsorship of the event, please contact us at surge (AT) omniti (DOT) com. 
-- Katherine Jeschke Marketing Director OmniTI Computer Consulting, Inc. 7070 Samuel Morse Drive, Ste.150 Columbia, MD 21046 O: 410/872-4910, 222 C: 443/643-6140 omniti.com circonus.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Mar 21 16:08:45 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 21 Mar 2011 15:08:45 +0000 Subject: Warming the cache from an existing squid proxy instance Message-ID: Hi all - I've got some long-running squid instances, mainly used for caching medium-sized binaries, which I'd like to replace with some varnish instances. The binaries are quite heavy to regenerate on the distant origin servers and there's a large number of them. Hence, I'd like to use the squid cache as a target to warm a (new, nearby) varnish instance instead of just pointing the varnish instance at the remote origin servers. The squid instances are running in proxy mode, and require (I *believe*) an HTTP CONNECT. I've looked around for people trying the same thing, but haven't come across any success stories. I'm perfectly prepared to be told that I simply have to reconfigure the squid instances in mixed proxy/origin-server mode, and that there's no way around it, but I thought I'd ask the list for guidance first ... Any thoughts? Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From scott at dor.ky Mon Mar 21 22:10:09 2011 From: scott at dor.ky (Scott Wilcox) Date: Mon, 21 Mar 2011 21:10:09 +0000 Subject: Using Varnish with SSL Message-ID: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Hello folks, I've recently been looking at introducing Varnish into my current fronted system. From what I've seen and in my own testing, I've been very impressed with the performance gains. One question I do have, is about using SSL with Varnish. I'll be using Varnish to push over to an Apache server which runs on :80 and :443 at present, serving also identical content (if needed for simplicity, these can be merged). What I'd like to know is the best way to configure this (and if its possible actually). I very much need to keep SSL access open, I realise that I could just run apache 'native' on :443, but I'd be a lot happier if I can push it through Varnish. Thoughts, comments and suggestions all most welcome! Scott. From perbu at varnish-software.com Mon Mar 21 22:37:42 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 21 Mar 2011 22:37:42 +0100 Subject: Using Varnish with SSL In-Reply-To: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Message-ID: Hi Scott. On Mon, Mar 21, 2011 at 10:10 PM, Scott Wilcox wrote: > > What I'd like to know is the best way to configure this (and if its possible actually). I very much need to keep SSL access open, I realise that I could just run apache 'native' on :443, but I'd be a lot happier if I can push it through Varnish. www.varnish-cache.org and www.varnish-software.com are running a hidden apache (w/PHP) behind Varnish. On port 443 there is a minimalistic nginx which does the SSL stuff and connects to Varnish. It works well. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
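One wrinkle worth sketching with that kind of nginx-in-front setup: if the same URLs are reachable both on plain port 80 and through the SSL listener, Varnish serves one cached object for both, which is only safe when the responses really are identical. Assuming the SSL terminator adds an X-Forwarded-Proto header before proxying to Varnish (that header name is an assumption, not something from this thread), the protocol can be folded into the hash so the variants stay separate. Untested, 2.1-style vcl_hash:

sub vcl_hash {
    set req.hash += req.url;
    if (req.http.host) {
        set req.hash += req.http.host;
    } else {
        set req.hash += server.ip;
    }
    # Set by the SSL terminator (assumed); absent on plain-HTTP traffic,
    # so http:// and https:// copies of a page get distinct cache entries.
    if (req.http.X-Forwarded-Proto) {
        set req.hash += req.http.X-Forwarded-Proto;
    }
    return (hash);
}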
http://www.varnish-software.com/whitepapers From straightflush at gmail.com Tue Mar 22 02:49:18 2011 From: straightflush at gmail.com (AD) Date: Mon, 21 Mar 2011 21:49:18 -0400 Subject: obj.ttl not available in vcl_deliver Message-ID: Hello, Per the docs it says that all the obj.* values should be available in vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get the following error: Variable 'obj.ttl' not accessible in method 'vcl_deliver'. This is on Ubuntu, Varnish 2.1.5. Any ideas ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Tue Mar 22 03:30:24 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Mon, 21 Mar 2011 19:30:24 -0700 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: Per lots of posts on this list, obj is now baresp in newer Varnish versions. It sounds like the documentation for this change hasn't been fully propagated. -- kb On Mon, Mar 21, 2011 at 18:49, AD wrote: > Hello, > > Per the docs it says that all the obj.* values should be available in > vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get > the following error: > > Variable 'obj.ttl' not accessible in method 'vcl_deliver'. > > This is on Ubuntu, Varnish 2.1.5. Any ideas ? > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Tue Mar 22 03:39:50 2011 From: straightflush at gmail.com (AD) Date: Mon, 21 Mar 2011 22:39:50 -0400 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: hmm, it seems beresp.* is available in vcl_fetch, but not vcl_deliver. I need obj.ttl in vcl_deliver (to get the TTL as it is in the cache, not from the backend). On Mon, Mar 21, 2011 at 10:30 PM, Ken Brownfield wrote: > Per lots of posts on this list, obj is now baresp in newer Varnish > versions. It sounds like the documentation for this change hasn't been > fully propagated. > -- > kb > > > > On Mon, Mar 21, 2011 at 18:49, AD wrote: > >> Hello, >> >> Per the docs it says that all the obj.* values should be available in >> vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get >> the following error: >> >> Variable 'obj.ttl' not accessible in method 'vcl_deliver'. >> >> This is on Ubuntu, Varnish 2.1.5. Any ideas ? >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattias at nucleus.be Tue Mar 22 10:01:45 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Tue, 22 Mar 2011 10:01:45 +0100 Subject: Using Varnish with SSL In-Reply-To: References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Message-ID: <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> Hi Per, > > What I'd like to know is the best way to configure this (and if its possible > actually). 
I very much need to keep SSL access open, I realise that I could just > run apache 'native' on :443, but I'd be a lot happier if I can push it through > Varnish. > > www.varnish-cache.org and www.varnish-software.com are running a > hidden apache (w/PHP) behind Varnish. On port 443 there is a > minimalistic nginx which does the SSL stuff and connects to Varnish. > It works well. So you're routing all SSL (port 443) via Nginx- > to Varnish -> to Apache? Meaning your nginx is covering the SSL certificates, and your backend is only getting "normal" unencrypted hits? How does that translate to performance? Are you losing a lot by passing it all via nginx first? It's an interesting discussion, I'd love to hear more on the "best practice" implementation of this to get the most performance gain. Regards, Mattias From perbu at varnish-software.com Tue Mar 22 10:25:33 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 22 Mar 2011 10:25:33 +0100 Subject: Using Varnish with SSL In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> Message-ID: On Tue, Mar 22, 2011 at 10:01 AM, Mattias Geniar wrote: >> www.varnish-cache.org and www.varnish-software.com are running a >> hidden apache (w/PHP) behind Varnish. On port 443 there is a >> minimalistic nginx which does the SSL stuff and connects to Varnish. >> It works well. > > So you're routing all SSL (port 443) via Nginx- > to Varnish -> to > Apache? Yes. Varnish on port 80 with a Apache backend at some other port on loopback. > Meaning your nginx is covering the SSL certificates, and your > backend is only getting "normal" unencrypted hits? Yes. > How does that translate to performance? Are you losing a lot by passing > it all via nginx first? Not really. There is some HTTP header processing that is unnecessary and that could have been saved if SSL was native in Varnish but all in all, with Varnish you usually have a lot of CPU to spare. I remember a couple of years back we where running the same stack and thousands of hits per second without any issues. > It's an interesting discussion, I'd love to hear more on the "best > practice" implementation of this to get the most performance gain. SSL used to be very expensive. It isn't anymore. There have been good advances in both hardware and software so SSL rather cheap. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From s.welschhoff at lvm.de Tue Mar 22 10:42:54 2011 From: s.welschhoff at lvm.de (Stefan Welschhoff) Date: Tue, 22 Mar 2011 10:42:54 +0100 Subject: Two Different Backends Message-ID: Hello, I want to configure varnish with two different backends. But with my configuration varnish can't handle with both. sub vcl_recv { if (req.url ~"^/partner/") { set req.backend = directory1; set req.http.host = "partnerservicesq00.xxx.de"; } if (req.url ~"^/schaden/") { set req.backend = directory2; set req.http.host = "servicesq00.xxx.de"; } else { set req.backend = default; } } When I take only the first server and comment the second out it works. But I want to have both. Kind regards Stefan Welschhoff -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: LVM_Unternehmenssignatur.pdf Type: application/pdf Size: 20769 bytes Desc: not available URL: From varnish at mm.quex.org Tue Mar 22 10:54:17 2011 From: varnish at mm.quex.org (Michael Alger) Date: Tue, 22 Mar 2011 17:54:17 +0800 Subject: Two Different Backends In-Reply-To: References: Message-ID: <20110322095417.GA26096@grum.quex.org> On Tue, Mar 22, 2011 at 10:42:54AM +0100, Stefan Welschhoff wrote: > > I want to configure varnish with two different backends. But with > my configuration varnish can't handle with both. There is a logic error here: > if (req.url ~"^/partner/") > { > set req.backend = directory1; > set req.http.host = "partnerservicesq00.xxx.de"; > } The above if-clause will be run, and then, regardless of the outcome, the next if-else-clause will be run: > if (req.url ~"^/schaden/") > { > set req.backend = directory2; > set req.http.host = "servicesq00.xxx.de"; > } > else > { > set req.backend = default; > } This means that if the URL matched /partner/ the backend will get set to back to default, because it falls through to the "else". I think you want your second if for /schaden/ to be an elsif. if (req.url ~ "^/partner/") { } elsif (req.url ~ "^/schaden/") { } else { } If that's not the problem you're having, please provide some more information, i.e. backend configuration and error messages if any, or the expected and actual result. From cdgraff at gmail.com Wed Mar 23 04:04:51 2011 From: cdgraff at gmail.com (Alejandro) Date: Wed, 23 Mar 2011 00:04:51 -0300 Subject: VarnishLog: Broken pipe (Debug) In-Reply-To: References: Message-ID: Hi guys, Some one can help with this? I have the same issue in the logs. Regards, Alejandro El 15 de marzo de 2011 10:51, Roberto O. Fern?ndez Crisial < roberto.fernandezcrisial at gmail.com> escribi?: > Hi guys, > > I need some help and I think you can help me. A few days ago I was realized > that Varnish is showing some error messages when debug mode is enable on > varnishlog: > > 4741 Debug c "Write error, retval = -1, len = 237299, errno = > Broken pipe" > 2959 Debug c "Write error, retval = -1, len = 237299, errno = > Broken pipe" > 2591 Debug c "Write error, retval = -1, len = 168289, errno = > Broken pipe" > 3517 Debug c "Write error, retval = -1, len = 114421, errno = > Broken pipe" > > I want to know what are those error messages and why are they generated. > Any suggestion? > > Thank you! > Roberto O. Fern?ndez Crisial > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Wed Mar 23 04:44:18 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:44:18 -0400 Subject: Child Process Killed Message-ID: The child process got killed abruptly. I am attaching a bunch of munin graphs, relevant syslog, the current varnishstat -1 output. I am running Varnish 2.1.5 on a 64 bit machine with the following command: sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 My VCL is fairly simple, and I think has nothing to do with the error. Any help would be appreciated. -T -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish.munin.tz Type: application/octet-stream Size: 88817 bytes Desc: not available URL: From nadahalli at gmail.com Wed Mar 23 04:46:05 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:46:05 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: Resending the other attachments (syslog and varnishstat) -T On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: > The child process got killed abruptly. > > I am attaching a bunch of munin graphs, relevant syslog, the current > varnishstat -1 output. > > I am running Varnish 2.1.5 on a 64 bit machine with the following command: > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 > > My VCL is fairly simple, and I think has nothing to do with the error. > > Any help would be appreciated. > > -T > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- client_conn 5409469 482.69 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 5409469 482.69 Client requests received cache_hit 5358032 478.10 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 51434 4.59 Cache misses backend_conn 51434 4.59 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 51433 4.59 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 200 . N struct sess_mem n_sess 100 . N struct sess n_object 45560 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 45669 . N struct objectcore n_objecthead 45673 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 200 . N worker threads n_wrk_create 200 0.02 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 28 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . N backends n_expired 5763 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 298470 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5409362 482.68 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 5409469 482.69 Total Sessions s_req 5409469 482.69 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 51433 4.59 Total fetch s_hdrbytes 1189049759 106098.85 Total header bytes s_bodybytes 5149727833 459509.93 Total body bytes sess_closed 5409469 482.69 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 226158115 20180.08 SHM records shm_writes 21752857 1941.01 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 27172 2.42 SHM MTX contention shm_cycles 97 0.01 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 102756 9.17 SMA allocator requests sma_nobj 91120 . SMA outstanding allocations sma_nbytes 72897093 . SMA outstanding bytes sma_balloc 82131133 . SMA bytes allocated sma_bfree 9234040 . SMA bytes free sms_nreq 1 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 418 . SMS bytes allocated sms_bfree 418 . SMS bytes freed backend_req 51434 4.59 Backend requests made n_vcl 9 0.00 N vcl total n_vcl_avail 9 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 155 . N total active purges n_purge_add 155 0.01 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 43087 3.84 N objects tested n_purge_re_test 561069 50.06 N regexps tested against n_purge_dups 140 0.01 N duplicate purges removed hcb_nolock 5409434 482.68 HCB Lookups without lock hcb_lock 45671 4.08 HCB Lookups with lock hcb_insert 45671 4.08 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 11207 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) -------------- next part -------------- Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858414] python invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858420] python cpuset=/ mems_allowed=0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858424] Pid: 5766, comm: python Not tainted 2.6.32-305-ec2 #9-Ubuntu Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858426] Call Trace: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858436] [] ? 
cpuset_print_task_mems_allowed+0x8c/0xc0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858442] [] oom_kill_process+0xe3/0x210 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858445] [] __out_of_memory+0x50/0xb0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858448] [] out_of_memory+0x5f/0xc0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858451] [] __alloc_pages_slowpath+0x4c1/0x560 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858455] [] __alloc_pages_nodemask+0x171/0x180 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858458] [] __do_page_cache_readahead+0xd7/0x220 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858461] [] ra_submit+0x1c/0x20 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858464] [] filemap_fault+0x3fe/0x450 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858468] [] __do_fault+0x50/0x680 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858470] [] handle_mm_fault+0x260/0x4f0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858476] [] do_page_fault+0x147/0x390 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858479] [] page_fault+0x28/0x30 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858481] Mem-Info: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858483] DMA per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858484] CPU 0: hi: 0, btch: 1 usd: 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858486] CPU 1: hi: 0, btch: 1 usd: 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858487] DMA32 per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858489] CPU 0: hi: 155, btch: 38 usd: 146 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858491] CPU 1: hi: 155, btch: 38 usd: 178 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858492] Normal per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858493] CPU 0: hi: 155, btch: 38 usd: 136 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858495] CPU 1: hi: 155, btch: 38 usd: 43 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858499] active_anon:1561108 inactive_anon:312311 isolated_anon:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858500] active_file:133 inactive_file:251 isolated_file:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858501] unevictable:0 dirty:9 writeback:0 unstable:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858501] free:10533 slab_reclaimable:711 slab_unreclaimable:7610 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858503] mapped:104 shmem:46 pagetables:0 bounce:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858508] DMA free:16384kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:16160kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858513] lowmem_reserve[]: 0 4024 7559 7559 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858519] DMA32 free:19904kB min:5916kB low:7392kB high:8872kB active_anon:3246376kB inactive_anon:649464kB active_file:0kB inactive_file:448kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4120800kB mlocked:0kB dirty:4kB writeback:0kB mapped:164kB shmem:16kB slab_reclaimable:212kB slab_unreclaimable:5428kB kernel_stack:112kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:59 all_unreclaimable? 
yes Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858524] lowmem_reserve[]: 0 0 3534 3534 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858530] Normal free:5844kB min:5196kB low:6492kB high:7792kB active_anon:2998056kB inactive_anon:599780kB active_file:532kB inactive_file:556kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3619728kB mlocked:0kB dirty:32kB writeback:0kB mapped:252kB shmem:168kB slab_reclaimable:2632kB slab_unreclaimable:25012kB kernel_stack:2272kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:672 all_unreclaimable? no Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858534] lowmem_reserve[]: 0 0 0 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858536] DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 4*4096kB = 16384kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858543] DMA32: 2942*4kB 1*8kB 0*16kB 0*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 19904kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858549] Normal: 471*4kB 3*8kB 6*16kB 2*32kB 3*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 5844kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858555] 477 total pagecache pages Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858557] 0 pages in swap cache Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858559] Swap cache stats: add 0, delete 0, find 0/0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858560] Free swap = 0kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858561] Total swap = 0kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882870] 1968128 pages RAM Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882873] 61087 pages reserved Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882874] 1106 pages shared Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882875] 1894560 pages non-shared Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882878] Out of memory: kill process 1491 (varnishd) score 2838972 or a child Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882892] Killed process 1492 (varnishd) Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (1492) died signal=9 Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child cleanup complete Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: child (21675) Started Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (21675) said Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (21675) said Child starts From nadahalli at gmail.com Wed Mar 23 04:48:35 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:48:35 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: I am running my Python origin-server on the same machine. It seems like the Python interpreter caused the OOM killer to kill Varnish. If that's the case, is there anything I can do prevent this from happening? -T On Tue, Mar 22, 2011 at 11:46 PM, Tejaswi Nadahalli wrote: > Resending the other attachments (syslog and varnishstat) > > -T > > > On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: > >> The child process got killed abruptly. >> >> I am attaching a bunch of munin graphs, relevant syslog, the current >> varnishstat -1 output. 
>> >> I am running Varnish 2.1.5 on a 64 bit machine with the following command: >> >> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >> 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p >> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >> session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 >> >> My VCL is fairly simple, and I think has nothing to do with the error. >> >> Any help would be appreciated. >> >> -T >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Wed Mar 23 05:27:48 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Wed, 23 Mar 2011 00:27:48 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: I found a couple of other threads involving the OOM killer. http://www.varnish-cache.org/lists/pipermail/varnish-misc/2009-April/002722.html http://www.varnish-cache.org/lists/pipermail/varnish-misc/2009-June/002838.html In both these cases, they had quite a few purge requests which added purge records which never got expired and that might have caused the out of control memory growth. I have a similar situation - with purges happening every 15 minutes. Mar 22 06:31:35 ip-10-116-105-253 varnishd[1491]: CLI telnet 127.0.0.1 60642 127.0.0.1 2000 Rd purge req.url ~ ^/\\?idsite=18&url=http%3A%2F% 2Fwww.people.com%2Fpeople%2Farticle%2F These are essentially the 'same' purges that are fired every 15 minutes. Do I have to setup the ban lurker to avoid out of control memory growth? -T On Tue, Mar 22, 2011 at 11:48 PM, Tejaswi Nadahalli wrote: > I am running my Python origin-server on the same machine. It seems like the > Python interpreter caused the OOM killer to kill Varnish. If that's the > case, is there anything I can do prevent this from happening? > > -T > > > On Tue, Mar 22, 2011 at 11:46 PM, Tejaswi Nadahalli wrote: > >> Resending the other attachments (syslog and varnishstat) >> >> -T >> >> >> On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: >> >>> The child process got killed abruptly. >>> >>> I am attaching a bunch of munin graphs, relevant syslog, the current >>> varnishstat -1 output. >>> >>> I am running Varnish 2.1.5 on a 64 bit machine with the following >>> command: >>> >>> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >>> 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p >>> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >>> session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 >>> >>> My VCL is fairly simple, and I think has nothing to do with the error. >>> >>> Any help would be appreciated. >>> >>> -T >>> >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Wed Mar 23 08:11:52 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Wed, 23 Mar 2011 08:11:52 +0100 Subject: Child Process Killed In-Reply-To: (Tejaswi Nadahalli's message of "Tue, 22 Mar 2011 23:48:35 -0400") References: Message-ID: <874o6uz4xz.fsf@qurzaw.varnish-software.com> ]] Tejaswi Nadahalli | I am running my Python origin-server on the same machine. It seems like the | Python interpreter caused the OOM killer to kill Varnish. If that's the | case, is there anything I can do prevent this from happening? Add more memory, don't leak memory in your python process, limit the amount of memory varnish uses, add swap or change the oom score for varnish? 
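For illustration, the last two of those suggestions might look roughly like this on the ~7.5 GB, swapless instance the syslog shows (the 3G figure is only a guess, and /proc/<pid>/oom_adj is the knob on 2.6.32-era kernels; newer kernels use /proc/<pid>/oom_score_adj instead):

# Cap object storage well below physical RAM; -s malloc only limits
# object storage, so per-object and per-thread overhead comes on top:
#   varnishd ... -s malloc,3G ...
#
# And/or make the OOM killer prefer other processes over varnishd.
# This has to be re-applied whenever the child process is restarted.
for pid in $(pgrep varnishd); do
    echo -15 | sudo tee /proc/$pid/oom_adj > /dev/null
done

(A value of -17 exempts a process from the OOM killer entirely.)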
-- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From andrea.campi at zephirworks.com Wed Mar 23 16:08:06 2011 From: andrea.campi at zephirworks.com (Andrea Campi) Date: Wed, 23 Mar 2011 16:08:06 +0100 Subject: Nested ESI + gzip + Squid 2.7.STABLE9 = invalid compressed data--format violated Message-ID: Hi, I am currently working with a client to implement ESI + gzip with trunk Varnish; since phk asked for help in breaking it, here we are :) Some background: the customer is a publishing company and we are working on the website for their daily newspaper, so ease of integration with their CMS and timely expiration of ESI fragments is paramount. Because of this, I'm using the classic technique of having the page esi:include a document with very short TTL, that in turn esi:includes the real fragment (that has a long TTL), including in the URL the last-modification TTL. So we have something like: index.shtml -> /includes2010/header.esi/homepage -> /includes2010/header.shtml/homepage This works nicely when I strip the Accept-Encoding header, on both 2.1.5 and trunk. But it breaks down with gzip compression on: Safari and Chrome give up at the point where the first ESI include is, Firefox mostly just errors out; all of them sometimes provide vague errors. The best info I have is from: "curl | zip" gzip: out: invalid compressed data--format violated Unsetting bereq.http.accept-encoding on the first ESI request didn't help; unsetting it on the second request *did* help, fixing the issue for all browsers. Setting TTL=0 for /includes2010/header.shtml/homepage didn't make a difference, nor did changing vcl_recv to return(pass), so it seems it's not a matter of what is stored in the cache. [.... a couple of hours later ....] Long story short, I finally realized the problem is not with Varnish per se, but with the office proxy (Squid 2.7.STABLE9); it seems to corrupt the gzip stream just after the 00 00 FF FF sequence: -0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff ec 3d db 72 dc +0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff 00 3d db 72 dc -0024040 75 21 aa 39 01 00 00 00 ff ff d4 59 db 52 23 39 +0024040 75 21 aa 39 01 00 00 00 ff ff 00 59 db 52 23 39 and so on. However, what I wrote above is still true: if I only have one level of ESI include, or if I have two but the inner one is not originally gzip, Squid doesn't corrupt the content. I have a few gzipped files, as well as sample vcl and html files (not that these matter after all), I can send them if those would help. Andrea From ronan at iol.ie Wed Mar 23 17:25:43 2011 From: ronan at iol.ie (Ronan Mullally) Date: Wed, 23 Mar 2011 16:25:43 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <871v2fwizs.fsf@qurzaw.varnish-software.com> References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. > | The current load is light (up to a couple of thousand active sessions, > | peak is around five thousand). Varnish has a fairly simple config with > | a director consisting of two Apache backends: > > This looks a bit odd: > > | backend backend1 { > | .host = "1.2.3.4"; > | .port = "80"; > | .connect_timeout = 5s; > | .first_byte_timeout = 90s; > | .between_bytes_timeout = 90s; > | A typical request is below. The first attempt fails with: > | > | 33 FetchError c http first read error: -1 0 (Success) > > This just means the backend closed the connection on us. 
> > | there is presumably a restart and the second attempt (sometimes to > | backend1, sometimes backend2) fails with: > | > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > This is a timeout, however: > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > That 2.89s backend response time doesn't add up with your timeouts. Can > you see if you can get a tcpdump of what's going on? Varnishlog and output from TCP for a typical occurance is below. If you need any further details let me know. -Ronan 16 ReqStart c 202.168.71.170 39173 403520520 16 RxRequest c POST 16 RxURL c /ajax.php 16 RxProtocol c HTTP/1.1 16 RxHeader c Via: 1.1 APSRVMY35001 16 RxHeader c Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 16 RxHeader c Referer: http://www.redcafe.net/f8/ 16 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 16 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 16 RxHeader c Host: www.redcafe.net 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 16 RxHeader c Accept-Language: en-gb,en;q=0.5 16 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 16 RxHeader c Keep-Alive: 115 16 RxHeader c X-Requested-With: XMLHttpRequest 16 RxHeader c Pragma: no-cache 16 RxHeader c Cache-Control: no-cache 16 RxHeader c Connection: Keep-Alive 16 RxHeader c Content-Length: 82 16 VCL_call c recv 16 VCL_return c pass 16 VCL_call c hash 16 VCL_return c hash 16 VCL_call c pass 16 VCL_return c pass 16 Backend c 53 redcafe redcafe1 53 TxRequest b POST 53 TxURL b /ajax.php 53 TxProtocol b HTTP/1.1 53 TxHeader b Via: 1.1 APSRVMY35001 53 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 53 TxHeader b Referer: http://www.redcafe.net/f8/ 53 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 53 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 53 TxHeader b Host: www.redcafe.net 53 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 53 TxHeader b Accept-Language: en-gb,en;q=0.5 53 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 53 TxHeader b X-Requested-With: XMLHttpRequest 53 TxHeader b Pragma: no-cache 53 TxHeader b Cache-Control: no-cache 53 TxHeader b Content-Length: 82 53 TxHeader b X-Forwarded-For: 202.168.71.170 53 TxHeader b X-Varnish: 403520520 16 FetchError c http first read error: -1 0 (Success) 53 BackendClose b redcafe1 16 Backend c 52 redcafe redcafe2 52 TxRequest b POST 52 TxURL b /ajax.php 52 TxProtocol b HTTP/1.1 52 TxHeader b Via: 1.1 APSRVMY35001 52 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 
52 TxHeader b Referer: http://www.redcafe.net/f8/ 52 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 52 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 52 TxHeader b Host: www.redcafe.net 52 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 52 TxHeader b Accept-Language: en-gb,en;q=0.5 52 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 52 TxHeader b X-Requested-With: XMLHttpRequest 52 TxHeader b Pragma: no-cache 52 TxHeader b Cache-Control: no-cache 52 TxHeader b Content-Length: 82 52 TxHeader b X-Forwarded-For: 202.168.71.170 52 TxHeader b X-Varnish: 403520520 16 FetchError c backend write error: 11 (Resource temporarily unavailable) 52 BackendClose b redcafe2 16 VCL_call c error 16 VCL_return c deliver 16 VCL_call c deliver 16 VCL_return c deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 503 16 TxResponse c Service Unavailable 16 TxHeader c Server: Varnish 16 TxHeader c Retry-After: 0 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Content-Length: 2623 16 TxHeader c Date: Mon, 14 Mar 2011 11:16:11 GMT 16 TxHeader c X-Varnish: 403520520 16 TxHeader c Age: 2 16 TxHeader c Via: 1.1 varnish 16 TxHeader c Connection: close 16 Length c 2623 16 ReqEnd c 403520520 1300101369.629967451 1300101371.923255682 0.000078917 2.293243885 0.000044346 16 SessionClose c error 16 StatSess c 202.168.71.170 39173 2 1 1 0 1 0 235 2623 First attempt (redcafe1 backend) 2011-03-14 11:16:09.892897 IP 193.27.1.46.22809 > 193.27.1.44.80: . ack 2433 win 91 0x0000: 4500 0034 d23d 4000 4006 e3f5 c11b 012e E..4.=@. at ....... 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q 0x0020: 8010 005b 35a1 0000 0101 080a 0c9d 985b ...[5..........[ 0x0030: 101a 178d .... 2011-03-14 11:16:09.926678 IP 193.27.1.46.22809 > 193.27.1.44.80: P 400:1549(1149) ack 2433 win 91 0x0000: 4500 04b1 d23e 4000 4006 df77 c11b 012e E....>@. at ..w.... 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q 0x0020: 8018 005b 8934 0000 0101 080a 0c9d 9863 ...[.4.........c 0x0030: 101a 178d 504f 5354 202f 616a 6178 2e70 ....POST./ajax.p 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u ... 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 
0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a73 :.403520520....s 0x0460: 6563 7572 6974 7974 6f6b 656e 3d31 3330 ecuritytoken=130 0x0470: 3030 3937 3737 302d 3539 3938 3235 3061 0097770-5998250a 0x0480: 6336 6662 3932 3431 6435 3335 3835 6366 c6fb9241d53585cf 0x0490: 3863 6537 3039 3534 6333 6531 6362 3430 8ce70954c3e1cb40 0x04a0: 2664 6f3d 7365 6375 7269 7479 746f 6b65 &do=securitytoke 0x04b0: 6e n 2011-03-14 11:16:09.926769 IP 193.27.1.46.22809 > 193.27.1.44.80: F 1549:1549(0) ack 2433 win 91 0x0000: 4500 0034 d23f 4000 4006 e3f3 c11b 012e E..4.?@. at ....... 0x0010: c11b 012c 5919 0050 41f0 4b83 4cdf c051 ...,Y..PA.K.L..Q 0x0020: 8011 005b 311b 0000 0101 080a 0c9d 9863 ...[1..........c 0x0030: 101a 178d .... 2011-03-14 11:16:09.926870 IP 193.27.1.44.80 > 193.27.1.46.22809: . ack 1550 win 36 0x0000: 4500 0034 a05f 4000 4006 15d4 c11b 012c E..4._ at .@......, 0x0010: c11b 012e 0050 5919 4cdf c051 41f0 4b84 .....PY.L..QA.K. 0x0020: 8010 0024 3140 0000 0101 080a 101a 179f ...$1 at .......... 0x0030: 0c9d 9863 ...c Second attempt (redcafe2 backend) 2011-03-14 11:16:11.923056 IP 193.27.1.46.55567 > 193.27.1.45.80: P 6711:7778(1067) ack 148116 win 757 0x0000: 4500 045f b04c 4000 4006 01bb c11b 012e E.._.L at .@....... 
0x0010: c11b 012d d90f 0050 3df9 5730 48af 6a67 ...-...P=.W0H.jg 0x0020: 8018 02f5 88e3 0000 0101 080a 0c9d 9a57 ...............W 0x0030: 1019 bc2d 504f 5354 202f 616a 6178 2e70 ...-POST./ajax.p 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u ... 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a :.403520520.... 2011-03-14 11:16:11.923115 IP 193.27.1.46.55567 > 193.27.1.45.80: F 7778:7778(0) ack 148116 win 757 0x0000: 4500 0034 b04d 4000 4006 05e5 c11b 012e E..4.M at .@....... 0x0010: c11b 012d d90f 0050 3df9 5b5b 48af 6a67 ...-...P=.[[H.jg 0x0020: 8011 02f5 562f 0000 0101 080a 0c9d 9a57 ....V/.........W 0x0030: 1019 bc2d ...- 2011-03-14 11:16:11.923442 IP 193.27.1.45.80 > 193.27.1.46.55567: . 
ack 7778 win 137 0x0000: 4500 0034 9178 4000 4006 24ba c11b 012d E..4.x at .@.$....- 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5b .....P..H.jg=.[[ 0x0020: 8010 0089 56b4 0000 0101 080a 1019 be15 ....V........... 0x0030: 0c9d 9a57 ...W 2011-03-14 11:16:11.923454 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7779 win 137 0x0000: 4500 0034 9179 4000 4006 24b9 c11b 012d E..4.y at .@.$....- 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5c .....P..H.jg=.[\ 0x0020: 8010 0089 56b3 0000 0101 080a 1019 be15 ....V........... 0x0030: 0c9d 9a57 ...W From phk at phk.freebsd.dk Wed Mar 23 19:24:26 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 23 Mar 2011 18:24:26 +0000 Subject: Nested ESI + gzip + Squid 2.7.STABLE9 = invalid compressed data--format violated In-Reply-To: Your message of "Wed, 23 Mar 2011 16:08:06 +0100." Message-ID: <13858.1300904666@critter.freebsd.dk> In message , Andr ea Campi writes: >Long story short, I finally realized the problem is not with Varnish >per se, but with the office proxy (Squid 2.7.STABLE9); it seems to >corrupt the gzip stream just after the 00 00 FF FF sequence: > >-0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff ec 3d db 72 dc >+0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff 00 3d db 72 dc > >-0024040 75 21 aa 39 01 00 00 00 ff ff d4 59 db 52 23 39 >+0024040 75 21 aa 39 01 00 00 00 ff ff 00 59 db 52 23 39 > >and so on. We found a similar issue in ngnix last week: A 1 byte chunked encoding get zap'ed to 0x00 just like what you show. Are you sure there is no ngnix instance involved ? It would be weird of both squid and ngnix has the same bug ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From TFigueiro at au.westfield.com Wed Mar 23 21:33:06 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Thu, 24 Mar 2011 07:33:06 +1100 Subject: Child Process Killed In-Reply-To: References: Message-ID: From: Tejaswi Nadahalli > I am running my Python origin-server on the same machine. It seems like > the Python interpreter caused the OOM killer to kill Varnish. If that's > the case, is there anything I can do prevent this from happening? I've been meaning to write-up a blog entry regarding the OOM killer in Linux (what a dumb idea) but in the mean time this should get you started. The OOM Killer is there because Linux, by default in most distros, allocates more memory than available (swap+ram) on the assumption that applications will never need it (this is called overcommiting). Mostly this is true but when it's not the oom_kill is called to free-up some memory so the kernel can keep its promise. Usually it does a shit job (as you just noticed) and I hate it so much. One way to solve this is to tweak oom_kill so it doesn't kill varnish processes. It's a bit cumbersome because you need to do that based on the PID, which you only learn after the process has started, leaving room for some nifty race conditions. Still, adding these to Varnish's init scripts should do what you need - look up online for details. The other way is to disable memory overcommit. 
Add to /etc/sysctl.conf:

# Disables memory overcommit
vm.overcommit_memory = 2
# Tweak to fool VM (read manual for setting above)
vm.overcommit_ratio = 100
# swap only if really needed
vm.swappiness = 10

and sudo /sbin/sysctl -e -p /etc/sysctl.conf

The problem with setting overcommit_memory to 2 is that the VM will not allocate more memory than you have available (the actual rule is a function of RAM, swap and overcommit_ratio, hence the tweak above).

This could be a problem for Varnish depending on the storage used. The file storage will mmap the file, resulting in a VM size as large as the file. If you don't have enough RAM the kernel will deny memory allocation and varnish will fail to start. At this point you either buy more RAM or tweak your swap size to account for greedy processes (i.e. processes that allocate a lot of memory but never use it).

TL;DR: buy more memory; get rid of memory-hungry scripts in your varnish box

Good luck.

From nadahalli at gmail.com  Wed Mar 23 22:10:20 2011
From: nadahalli at gmail.com (Tejaswi Nadahalli)
Date: Wed, 23 Mar 2011 17:10:20 -0400
Subject: Child Process Killed
In-Reply-To:
References:
Message-ID:

Thanks for the detailed explanation of why the OOM Killer strikes. I have done some reading about it, and am tinkering with how to stop it from killing varnishd.

What I am curious about is - how did the OOM killer get invoked at all? My python process is fairly basic, and wouldn't have consumed much memory at all. When varnish reaches its malloc limit, I thought cached objects would start getting nuked. My LRU nuke counters were 0 through the process. So, instead of nuking objects gracefully, I had a varnish child restart. This is what I am worried about.

If I can get nuking by reducing the overall memory footprint by reducing the malloc limits, I will gladly do it. Do you think that might help?

-T

On Wed, Mar 23, 2011 at 4:33 PM, Thiago Figueiro wrote:
> From: Tejaswi Nadahalli
> > I am running my Python origin-server on the same machine. It seems like
> > the Python interpreter caused the OOM killer to kill Varnish. If that's
> > the case, is there anything I can do prevent this from happening?
>
> I've been meaning to write-up a blog entry regarding the OOM killer in
> Linux (what a dumb idea) but in the mean time this should get you started.
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From TFigueiro at au.westfield.com  Thu Mar 24 03:39:24 2011
From: TFigueiro at au.westfield.com (Thiago Figueiro)
Date: Thu, 24 Mar 2011 13:39:24 +1100
Subject: Child Process Killed
In-Reply-To:
References:
Message-ID:

> Do you think that might help?

You're looking for /proc/PID/oom_score; here, read this:
http://lwn.net/Articles/317814/

Reducing memory usage will help, yes. And what Tollef said in his reply is the practical approach: add ram and/or swap.
At some point the sum of the processes' RESIDENT memory sizes is bigger than SWAP+RAM, and that is what triggers oom_kill. The other way around is what you suggested yourself: reduce memory usage.

G'luck

From nfn at gmx.com  Fri Mar 25 10:47:33 2011
From: nfn at gmx.com (Nuno Neves)
Date: Fri, 25 Mar 2011 09:47:33 +0000
Subject: Using cron to purge cache
Message-ID: <20110325094733.232990@gmx.com>

Hello,

I have a file named varnish-purge with this content that is executed daily by cron, but the objects remain in the cache, even when I run it manually.

--------------------------------------------------------------------------------------------
#!/bin/sh

/usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*"
--------------------------------------------------------------------------------------------

The cron file is:

-----------------------------------
#!/bin/sh
/usr/local/bin/varnish-purge
-----------------------------------

I already used:

/usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 url.purge '.*'

and

/usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 url.purge .

without success. The only way to purge the cache is restarting varnish.

I'm using varnish 2.1.5 from http://repo.varnish-cache.org
http://repo.varnish-cache.org/debian/GPG-key.txt

Any guidance will be appreciated.

Thanks
Nuno
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From perbu at varnish-software.com  Fri Mar 25 10:54:47 2011
From: perbu at varnish-software.com (Per Buer)
Date: Fri, 25 Mar 2011 10:54:47 +0100
Subject: Using cron to purge cache
In-Reply-To: <20110325094733.232990@gmx.com>
References: <20110325094733.232990@gmx.com>
Message-ID:

Hi Nuno.

On Fri, Mar 25, 2011 at 10:47 AM, Nuno Neves wrote:
> Hello,
>
> I have a file named varnish-purge with this content that is executed daily
> by cron, but the objects remain in the cache, even when I run it manually.
> --------------------------------------------------------------------------------------------
> #!/bin/sh
>
> /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*"

url.purge will create what we call a "ban", or a filter. Think of it as a lazy purge. The objects will remain in memory but will be killed during lookup. If you want to kill the objects from cache you'd have to set up the ban lurker to walk the objects and expunge them.
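The alternative is a real purge done per object over HTTP. A minimal 2.1-style VCL sketch (the PURGE request method and the purgers ACL are conventions you wire up yourself, not built-ins):

acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        # look the object up so vcl_hit/vcl_miss can act on it
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        # expire the cached object immediately
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}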
If you want the objects to actually disappear from memory right away you would have to do a HTTP PURGE call, and setting the TTL to zero, but that means you'd have to kill off every URL in cache. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From ronan at iol.ie Fri Mar 25 11:12:54 2011 From: ronan at iol.ie (Ronan Mullally) Date: Fri, 25 Mar 2011 10:12:54 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: I am still encountering this problem - about 1% on average of POSTs are failing with a 503 when there is no problem apparent on the back-ends. GETs are not affected: Hour GETs Fails POSTs Fails 00:00 38060 0 (0.00%) 480 2 (0.42%) 01:00 34051 0 (0.00%) 412 0 (0.00%) 02:00 29881 0 (0.00%) 383 2 (0.52%) 03:00 25741 0 (0.00%) 374 1 (0.27%) 04:00 22296 0 (0.00%) 326 2 (0.61%) 05:00 22594 0 (0.00%) 349 20 (5.73%) 06:00 31422 0 (0.00%) 408 6 (1.47%) 07:00 58746 0 (0.00%) 656 6 (0.91%) 08:00 74307 0 (0.00%) 870 4 (0.46%) 09:00 87386 0 (0.00%) 1280 8 (0.62%) 10:00 51744 0 (0.00%) 741 8 (1.08%) 11:00 50060 0 (0.00%) 825 1 (0.12%) 12:00 58573 0 (0.00%) 664 5 (0.75%) 13:00 60548 0 (0.00%) 735 7 (0.95%) 14:00 60242 0 (0.00%) 875 8 (0.91%) 15:00 61427 0 (0.00%) 778 3 (0.39%) 16:00 66480 0 (0.00%) 810 4 (0.49%) 17:00 65749 0 (0.00%) 836 12 (1.44%) 18:00 64312 0 (0.00%) 732 3 (0.41%) 19:00 60930 0 (0.00%) 652 5 (0.77%) 20:00 59646 0 (0.00%) 626 1 (0.16%) 21:00 61218 0 (0.00%) 674 3 (0.45%) 22:00 55908 0 (0.00%) 598 3 (0.50%) 23:00 45173 0 (0.00%) 560 1 (0.18%) There was another poster on this thread with the same problem which suggests a possible varnish problem rather than anything specific to my setup. Does anybody have any ideas? -Ronan On Wed, 23 Mar 2011, Ronan Mullally wrote: > On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > > > | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. > > | The current load is light (up to a couple of thousand active sessions, > > | peak is around five thousand). Varnish has a fairly simple config with > > | a director consisting of two Apache backends: > > > > This looks a bit odd: > > > > | backend backend1 { > > | .host = "1.2.3.4"; > > | .port = "80"; > > | .connect_timeout = 5s; > > | .first_byte_timeout = 90s; > > | .between_bytes_timeout = 90s; > > | A typical request is below. The first attempt fails with: > > | > > | 33 FetchError c http first read error: -1 0 (Success) > > > > This just means the backend closed the connection on us. > > > > | there is presumably a restart and the second attempt (sometimes to > > | backend1, sometimes backend2) fails with: > > | > > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > > > This is a timeout, however: > > > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > > > That 2.89s backend response time doesn't add up with your timeouts. Can > > you see if you can get a tcpdump of what's going on? > > Varnishlog and output from TCP for a typical occurance is below. If you > need any further details let me know. 
>
> -Ronan
>
> [quoted varnishlog and tcpdump from the message above snipped]
> 0x0010: c11b 012d d90f 0050 3df9 5b5b 48af 6a67 ...-...P=.[[H.jg > 0x0020: 8011 02f5 562f 0000 0101 080a 0c9d 9a57 ....V/.........W > 0x0030: 1019 bc2d ...- > 2011-03-14 11:16:11.923442 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7778 win 137 > 0x0000: 4500 0034 9178 4000 4006 24ba c11b 012d E..4.x at .@.$....- > 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5b .....P..H.jg=.[[ > 0x0020: 8010 0089 56b4 0000 0101 080a 1019 be15 ....V........... > 0x0030: 0c9d 9a57 ...W > 2011-03-14 11:16:11.923454 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7779 win 137 > 0x0000: 4500 0034 9179 4000 4006 24b9 c11b 012d E..4.y at .@.$....- > 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5c .....P..H.jg=.[\ > 0x0020: 8010 0089 56b3 0000 0101 080a 1019 be15 ....V........... > 0x0030: 0c9d 9a57 ...W > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From mrbits.dcf at gmail.com Fri Mar 25 11:38:51 2011 From: mrbits.dcf at gmail.com (MrBiTs) Date: Fri, 25 Mar 2011 07:38:51 -0300 Subject: Using cron to purge cache Message-ID: <4D8C70BB.8010408@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/25/2011 06:54 , Per Buer wrote: > Hi Nuno. > > On Fri, Mar 25, 2011 at 10:47 AM, Nuno Neves wrote: >> Hello, >> >> I have a file named varnish-purge with this content that it's executed daily >> by cron, but the objects remain in the cache, even when I run it manually. >> -------------------------------------------------------------------------------------------- >> #!/bin/sh >> >> /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*" > > url.purge will create what we call a "ban", or a filter. Think of it > as a lazy purge. The objects will remain in memory but killed during > lookup. If you want to kill the objects from cache you'd have to set > up the ban lurker to walk the objects and expunge them. > > If you want the objects to actually disappear from memory right away > you would have to do a HTTP PURGE call, and setting the TTL to zero, > but that means you'd have to kill off every URL in cache. > I think we can do a nice discussion here. First, and this is a big off-topic here, if I need to purge all contents from time to time, it's better to create a huge webserver structure, to support requests, change the application a little to generate static pages from time to time to not increase the database load and forget about Varnish. But this is discussion to another list, another time. Second, is this recommended ? I mean, purge all URL, all contents in cache will do varnish to request this content again to backend, increasing server load and it can cause problems. What to you guys think about it ? I think it is better to have a purge system (like a message queue or a form to kill some objetcs) to remove only really wanted objects. If you need to purge all varnish contents, why not just restart varnish from time to time ? But, again, all backend issues must be considerated here. - -- LLAP .0. 
MrBiTs - mrbits.dcf at gmail.com ..0 GnuPG - http://keyserver.fug.com.br:11371/pks/lookup?op=get&search=0x6EC818FC2B3CA5AB 000 http://www.mrbits.com.br -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (Darwin) iQEcBAEBCAAGBQJNjHC7AAoJEG7IGPwrPKWr3kkH/1zim9haorjg4qbrLeefsyjd chBzbCdNwNUPqjbKW+V0hyw7OZY80boMCfD7ZIWgWd+Dy5kCou01D7qebRGYGHt9 oaSmgNFXISMUwOtZwl4F5uKsKhxH7ZtBdJncomoSz3+Apl9yY3gB0aYYfNoi8YoS btgWsNKBzWQTR2pFUz8dYqumrr0aQU3sQRhqBQ7YU165GnhzBSAOxQuTXwM5Lp+j IPLwfWuPaPdSt5nhueDrovdQqHGctWDjkB2JGpi0M8ALvPHETKIZA5oBMHXuXhXY uURPvOsLm2bFmhzDYG3Zr0sJ81ek4K7T2LXd4yT9uiqisnyd5WjbfTH6XS4keDY= =x2+0 -----END PGP SIGNATURE----- From contact at jpluscplusm.com Fri Mar 25 11:55:14 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 25 Mar 2011 10:55:14 +0000 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: On 21 March 2011 15:08, Jonathan Matthews wrote: > Hi all - > > I've got some long-running squid instances, mainly used for caching > medium-sized binaries, which I'd like to replace with some varnish > instances. ?The binaries are quite heavy to regenerate on the distant > origin servers and there's a large number of them. ?Hence, I'd like to > use the squid cache as a target to warm a (new, nearby) varnish > instance instead of just pointing the varnish instance at the remote > origin servers. > > The squid instances are running in proxy mode, and require (I > *believe*) an HTTP CONNECT. ?I've looked around for people trying the > same thing, but haven't come across any success stories. ?I'm > perfectly prepared to be told that I simply have to reconfigure the > squid instances in mixed proxy/origin-server mode, and that there's no > way around it, but I thought I'd ask the list for guidance first ... > > Any thoughts? Anyone? All opinions welcome ... :-) -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From lampe at hauke-lampe.de Sat Mar 26 18:05:03 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Sat, 26 Mar 2011 18:05:03 +0100 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: <4D8E1CBF.6060802@hauke-lampe.de> On 25.03.2011 11:55, Jonathan Matthews wrote: > The squid instances are running in proxy mode, and require (I > *believe*) an HTTP CONNECT. Do they really? I would think squid just pipes a CONNECT request wihout caching the contents, just like varnish does. I'm not quite sure about that, though. What I *think* you need to do is to rewrite the request URL so that it contains the hostname. An incoming request like this. | GET /foo | Host: example.com should be passed to squid in this form: | GET http://example.com/foo In VCL: set req.backend = squid; if (req.url ~ "^/" && req.http.Host) { set req.url = "http://" req.http.Host req.url; unset req.http.Host; } Hauke. From iliakan at gmail.com Sat Mar 26 21:12:48 2011 From: iliakan at gmail.com (Ilia Kantor) Date: Sat, 26 Mar 2011 23:12:48 +0300 Subject: Current sessions count Message-ID: Hello, How can I get a count of current Varnish sessions from inside VCL? Inline C will do. Approximate will do. I need it to enable DDOS protections if the count of current connects exceeds given constant. Maybe there is a VSL_stats field? -- --- Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kristian at varnish-software.com Mon Mar 28 12:52:13 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Mon, 28 Mar 2011 12:52:13 +0200 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: <20110328105213.GA9172@localhost.localdomain> On Mon, Mar 21, 2011 at 10:39:50PM -0400, AD wrote: > On Mon, Mar 21, 2011 at 10:30 PM, Ken Brownfield wrote: > > > Per lots of posts on this list, obj is now baresp in newer Varnish > > versions. It sounds like the documentation for this change hasn't been > > fully propagated. Small clarification (which should go into the docs, somewhere): obj.* still exists. beresp is the backend response which you can modify in vcl_fetch. Upon exiting vcl_fetch, beresp is used to allocate space for and populate the obj-structure. The only part of obj.* that is available in vcl_deliver is obj.hits. What you can do is store the ttl on req.http somewhere (assuming the conversions work) in vcl_hit, then copy it onto resp.* in vcl_deliver. - Kristian From johnson at nmr.mgh.harvard.edu Mon Mar 28 13:23:41 2011 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Mon, 28 Mar 2011 07:23:41 -0400 (EDT) Subject: 2.0.5 -> 2.1.5 Message-ID: Hi, Currently running 2.0.5. Has been working so well as a rule we just forgot about it. Would like to update to 2.1.5 because 2.0.5 hung up last week. I saw mention of hang bug in 2.0.5 but this is the first time we've felt it. I made a change to the config a while back to prevent double caching on a server name alternate name. Question, this is a plug n' play, yes? I can just install the new RPM and it will take off were it was stopped? No config differences that are applicable? That would be bloody awesome. If the is anything that will cause I problem I'd like to know about it before the update. Want the server down as short a time as possible. Tnx. ------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |"Life is chaos. Chaos is life. Control is an Charlestown, MA., 02129 USA | illusion." Trance Gemini ------------------------------------------------------------------------------- The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. From s.welschhoff at lvm.de Mon Mar 28 13:24:36 2011 From: s.welschhoff at lvm.de (Stefan Welschhoff) Date: Mon, 28 Mar 2011 13:24:36 +0200 Subject: localhost Message-ID: Hello, I am very new in varnish. I try to get a return code 200 when varnish opens the default backend. The default backend will be localhost. Is it possible? Thanks for your help. Kind regards -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: LVM_Unternehmenssignatur.pdf
Type: application/pdf
Size: 20769 bytes
Desc: not available
URL: 

From samcrawford at gmail.com  Mon Mar 28 17:06:35 2011
From: samcrawford at gmail.com (Sam Crawford)
Date: Mon, 28 Mar 2011 16:06:35 +0100
Subject: 2.0.5 -> 2.1.5
In-Reply-To: 
References: 
Message-ID: 

It'd probably be wise to spin up a 2.1.5 instance of Varnish on a
development server using your production VCL. If it parses it okay and
starts, then you should be fine.

The only change that may catch you out is that obj.* changed to
beresp.* from 2.0.x to 2.1.x.

Thanks,

Sam


On 28 March 2011 12:23, Chris Johnson wrote:
> ? ? Hi,
>
> ? ? Currently running 2.0.5. ?Has been working so well as a rule we
> just forgot about it. ?Would like to update to 2.1.5 because 2.0.5 hung
> up last week. ?I saw mention of hang bug in 2.0.5 but this is the first
> time we've felt it.
>
> ? ? I made a change to the config a while back to prevent double
> caching on a server name alternate name. ?Question, this is a plug n' play,
> yes? ?I can just install the new RPM and it will take off were it was
> stopped? ?No config differences that are applicable? ?That would be bloody
> awesome. ?If the is anything that will cause I problem I'd like
> to know about it before the update. ?Want the server down as short a
> time as possible. ?Tnx.
>
> -------------------------------------------------------------------------------
> Chris Johnson ? ? ? ? ? ? ? |Internet: johnson at nmr.mgh.harvard.edu
> Systems Administrator ? ? ? |Web:
> ?http://www.nmr.mgh.harvard.edu/~johnson
> NMR Center ? ? ? ? ? ? ? ? ?|Voice: ? ?617.726.0949
> Mass. General Hospital ? ? ?|FAX: ? ? ?617.726.7422
> 149 (2301) 13th Street ? ? ?|"Life is chaos. ?Chaos is life. ?Control is an
> Charlestown, MA., 02129 USA | illusion." ?Trance Gemini
> -------------------------------------------------------------------------------
>
>
> The information in this e-mail is intended only for the person to whom it is
> addressed. If you believe this e-mail was sent to you in error and the
> e-mail
> contains patient information, please contact the Partners Compliance
> HelpLine at
> http://www.partners.org/complianceline . If the e-mail was sent to you in
> error
> but does not contain patient information, please contact the sender and
> properly
> dispose of the e-mail.
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

From mhettwer at team.mobile.de  Mon Mar 28 17:24:51 2011
From: mhettwer at team.mobile.de (Hettwer, Marian)
Date: Mon, 28 Mar 2011 16:24:51 +0100
Subject: localhost
In-Reply-To: 
Message-ID: 

On 28.03.11 13:24, "Stefan Welschhoff" wrote:

>Hello,
Hi there,

>
>I am very new in varnish. I try to get a return code 200 when varnish
>opens the default backend. The default backend will be localhost. Is it
>possible?

Short answer: Yes, if your backend behaves well.

A little longer answer:
If you configure a backend like this:

backend default {
  .host = "127.0.0.1";
  .port = "8080";
}

And assuming that your backend really listens on localhost:8080, then use
the following in vcl_recv:

sub vcl_recv {
  set req.backend = default;
}

Now start varnish, and assuming you let varnish listen on localhost:80,
you can do something like this:

wget -O /dev/null -q -S http://localhost/foo.txt

The request GET /foo.txt goes to Varnish, which forwards it to your
backend at localhost:8080.
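If you also want Varnish itself to keep checking that the backend answers
with a 200, you can attach a health probe to the backend definition. A
minimal sketch, assuming Varnish 2.1 probe syntax; the probe URL and the
timings here are made up, so point it at something cheap that your backend
really serves:

backend default {
  .host = "127.0.0.1";
  .port = "8080";
  .probe = {
    .url = "/";          # should be a URL the backend answers with 200
    .interval = 5s;      # poll every 5 seconds
    .timeout = 1s;
    .window = 5;         # look at the last 5 polls ...
    .threshold = 3;      # ... and require at least 3 good ones
  }
}

A backend that keeps failing the probe is marked sick and Varnish stops
sending requests to it.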
If "wget -0 /dev/null -q -S http://localhost:8080/foo.txt" works, then "wget -0 /dev/null -q -S http://localhost/foo.txt" will work too. Cheers, Marian PS.: Start with the fine documentation of varnish! From johnson at nmr.mgh.harvard.edu Mon Mar 28 17:54:23 2011 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Mon, 28 Mar 2011 11:54:23 -0400 (EDT) Subject: 2.0.5 -> 2.1.5 In-Reply-To: References: Message-ID: Well my config has the following in vcl fetch sub vcl_fetch { if (!obj.cacheable) { return (pass); } if (req.url ~ "^/fswiki") { unset req.http.Set-Cookie; set obj.ttl = 600s; } if (req.url ~ "^/wiki/fswiki_htdocs") { unset req.http.Set-Cookie; set obj.ttl = 600s; } if (obj.http.Set-Cookie) { return (pass); } set obj.prefetch = -30s; return (deliver); } But it's isolated. Presumably the 2.1.5 has its own. > It'd probably be wise to spin up a 2.1.5 instance of Varnish on a > development server using your production VCL. If it parses it okay and > starts, then you should be fine. > > The only change that may catch you out is that obj.* changed to > beresp.* from 2.0.x to 2.1.x. > > Thanks, > > Sam > > > On 28 March 2011 12:23, Chris Johnson wrote: >> ? ? Hi, >> >> ? ? Currently running 2.0.5. ?Has been working so well as a rule we >> just forgot about it. ?Would like to update to 2.1.5 because 2.0.5 hung >> up last week. ?I saw mention of hang bug in 2.0.5 but this is the first >> time we've felt it. >> >> ? ? I made a change to the config a while back to prevent double >> caching on a server name alternate name. ?Question, this is a plug n' play, >> yes? ?I can just install the new RPM and it will take off were it was >> stopped? ?No config differences that are applicable? ?That would be bloody >> awesome. ?If the is anything that will cause I problem I'd like >> to know about it before the update. ?Want the server down as short a >> time as possible. ?Tnx. >> >> ------------------------------------------------------------------------------- >> Chris Johnson ? ? ? ? ? ? ? |Internet: johnson at nmr.mgh.harvard.edu >> Systems Administrator ? ? ? |Web: >> ?http://www.nmr.mgh.harvard.edu/~johnson >> NMR Center ? ? ? ? ? ? ? ? ?|Voice: ? ?617.726.0949 >> Mass. General Hospital ? ? ?|FAX: ? ? ?617.726.7422 >> 149 (2301) 13th Street ? ? ?|"Life is chaos. ?Chaos is life. ?Control is an >> Charlestown, MA., 02129 USA | illusion." ?Trance Gemini >> ------------------------------------------------------------------------------- >> >> >> The information in this e-mail is intended only for the person to whom it is >> addressed. If you believe this e-mail was sent to you in error and the >> e-mail >> contains patient information, please contact the Partners Compliance >> HelpLine at >> http://www.partners.org/complianceline . If the e-mail was sent to you in >> error >> but does not contain patient information, please contact the sender and >> properly >> dispose of the e-mail. >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > ------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. 
General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |I know a place where dreams are born and time Charlestown, MA., 02129 USA | is never planned. Neverland ------------------------------------------------------------------------------- From info at songteksten.nl Tue Mar 29 14:56:15 2011 From: info at songteksten.nl (Maikel - Songteksten.nl) Date: Tue, 29 Mar 2011 14:56:15 +0200 Subject: Mobile redirect question Message-ID: <1301403375.2060.20.camel@maikel-laptop> Hi, I'm using currently the following code to do a mobile site redirect. I found it somewhere on the internet. if ( req.http.user-agent ~ "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" && req.http.host ~ "(www.example.nl|www.example.be)" ) { set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", "http://m.\2"); error 750 req.http.newhost; The redirect from www.example.nl to m.example.nl works perfectly, only www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou the page.php?id=1 part). Is it possible to change the redirect so it also includes the rest of the url? Thanks, Maikel From bjorn at ruberg.no Tue Mar 29 15:04:33 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Tue, 29 Mar 2011 15:04:33 +0200 Subject: Mobile redirect question In-Reply-To: <1301403375.2060.20.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> Message-ID: <4D91D8E1.9020905@ruberg.no> On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > Hi, > > I'm using currently the following code to do a mobile site redirect. I > found it somewhere on the internet. > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; > > The redirect from www.example.nl to m.example.nl works perfectly, only > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > the page.php?id=1 part). > > Is it possible to change the redirect so it also includes the rest of > the url? You don't show how error 750 is handled in your VCL, so it's a bit hard to tell how to improve your current config. 
However, the following URL should get you going: http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping -- Bj?rn From info at songteksten.nl Tue Mar 29 15:13:00 2011 From: info at songteksten.nl (Maikel - Songteksten.nl) Date: Tue, 29 Mar 2011 15:13:00 +0200 Subject: Mobile redirect question In-Reply-To: <4D91D8E1.9020905@ruberg.no> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> Message-ID: <1301404380.2060.21.camel@maikel-laptop> The redirect looks like this: sub vcl_error { if (obj.status == 750) { set obj.http.Location = obj.response; set obj.status = 302; return(deliver); } } Maikel On Tue, 2011-03-29 at 15:04 +0200, Bj?rn Ruberg wrote: > On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > > Hi, > > > > I'm using currently the following code to do a mobile site redirect. I > > found it somewhere on the internet. > > > > if ( req.http.user-agent ~ > > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > > > && req.http.host ~ "(www.example.nl|www.example.be)" > > > > ) { > > > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > > "http://m.\2"); > > error 750 req.http.newhost; > > > > The redirect from www.example.nl to m.example.nl works perfectly, only > > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > > the page.php?id=1 part). > > > > Is it possible to change the redirect so it also includes the rest of > > the url? > > You don't show how error 750 is handled in your VCL, so it's a bit hard > to tell how to improve your current config. However, the following URL > should get you going: > > http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping > From bjorn at ruberg.no Tue Mar 29 15:16:17 2011 From: bjorn at ruberg.no (=?UTF-8?B?QmrDuHJuIFJ1YmVyZw==?=) Date: Tue, 29 Mar 2011 15:16:17 +0200 Subject: Mobile redirect question In-Reply-To: <1301404380.2060.21.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> <1301404380.2060.21.camel@maikel-laptop> Message-ID: <4D91DBA1.9060108@ruberg.no> On 29. mars 2011 15:13, Maikel - Songteksten.nl wrote: > The redirect looks like this: > > sub vcl_error { > if (obj.status == 750) { > set obj.http.Location = obj.response; > set obj.status = 302; > return(deliver); > } > } You should still take a look at the URL I mentioned. And please don't top-post. -- Bj?rn From scaunter at topscms.com Tue Mar 29 15:53:41 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Tue, 29 Mar 2011 09:53:41 -0400 Subject: Mobile redirect question In-Reply-To: <4D91D8E1.9020905@ruberg.no> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C011D20CB@TMG-EVS02.torstar.net> On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > Hi, > > I'm using currently the following code to do a mobile site redirect. 
I > found it somewhere on the internet. > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*ht c.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia. *|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*Son yEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcat el.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoM o.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^K WC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*| ^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.* |^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Win dows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; > > The redirect from www.example.nl to m.example.nl works perfectly, only > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > the page.php?id=1 part). > > Is it possible to change the redirect so it also includes the rest of > the url? > You don't show how error 750 is handled in your VCL, so it's a bit hard > to tell how to improve your current config. However, the following URL > should get you going: > http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping set req.http.newhost = regsub(req.url, "^/(.*)", "http://m.example.ca/\1"); Stef From geoff at uplex.de Tue Mar 29 16:16:17 2011 From: geoff at uplex.de (Geoff Simmons) Date: Tue, 29 Mar 2011 16:16:17 +0200 Subject: Mobile redirect question In-Reply-To: <1301403375.2060.20.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> Message-ID: <4D91E9B1.9080407@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/29/11 02:56 PM, Maikel - Songteksten.nl wrote: > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; This is not what you asked about, but you're almost certainly losing a lot of performance with that regex. I would suggest that you put the check against req.http.host first (so that it doesn't bother with the pattern match when it doesn't have to), and above all, get rid of the leading and trailing .*'s in the regex. When you match a string against a regex like ".*foobar.*", it first matches the leading .* all the way until the end of the input string, overlooking any instance of "foobar" it sees along the way. Then it starts backtracking until it can match ".*f", then ".*fo", and so on. If it can match ".*foobar", it then takes the trouble to match the trailing .* to the end of the string. 
This is happening for all of the alternates in your regex until a match is found. phk's advice at VUG was: write your regex so that you can prove as quickly as possible that an input string *doesn't* match it. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNkemwAAoJEOUwvh9pJNURLpEP/3t9+FTbTpzXmx9cts+2NOcr VJl4L+bi4+b+Zkn46yMgZjwLOyRWhqYQBfFozqKVOIX204jH0kzuHFqWwkF3luNO B6izenicK6jhQurdUsS4CTJ6j74yCgX1Jks9DC4Z3pLcwwY/swzJsV2ldKx9rqWJ sr6NJv8WxSz1Pb/i5BP6C7veplmO/rdKLZxzll5b7Qic6LicrRGG5ny0exUdysce q2ZlcAXCe7//7Ha7+1wlw5xXb3APcx96SB4bh+ASS63KgHevKkSwPOFFUdv//FzG xLEc/U5MqKjiFErx0IPzPZrD+E2Yf0PIVqRc9L7eL9g5SSJEfqwmFCrHucLYpmpW tpdDepflnUv1p7IkY0boNabds8AhRPAIAtYi6o8+mjGQBtGVdOuQ4SbH2+2OOMLz x3YtAcjUjhArg8gUSjZRPIXfbHHy6vSiYKBPBqJUPmUBRw009VsCNO1F58b1sXJb YVmX6cKwfcq97GFqBBp+CsKEyJsJaubIReXQOoJTRrPVHqqn4aWmYOk1UHQiN5Pw iXNFJQbV/bh0jrgk5W5bcOS+WyvwSQm0aK8SMsHnVY4gh73md6kcD1rybc3S5doC +WEBLMdJWteDOZMQDBVgXXUmwmzHk8eX+6cRQKe4IaXXgRSoGOAZiwy+6G7a3YYk klz7Nm1RM3vs6EmQfvoY =kwSJ -----END PGP SIGNATURE----- From listas at kurtkraut.net Tue Mar 29 22:09:31 2011 From: listas at kurtkraut.net (Kurt Kraut) Date: Tue, 29 Mar 2011 17:09:31 -0300 Subject: How to collect lines from varnishncsa only from a specific domain? Message-ID: Hi, I'm trying to use varnishncsa -c -I to collect the output of varnishncsa concerning a specific domain (e.g.: domain.com). I've attempted the following commands: varnishncsa -c -I *domain.com* varnishncsa -c -I /*domain.com/ And none of them worked. But the following command works: varnishncsa -c | grep domain.com I feel there is something odd with the varnishncsa -I command. How does this work? Thanks in advance, Kurt Kraut -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionathan at gmail.com Wed Mar 30 04:43:43 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Tue, 29 Mar 2011 23:43:43 -0300 Subject: varnish as traffic director Message-ID: hi! is there any way to use varnish to direct my traffic to different backends depending on the requested url? so for example I would have 2 different backends: - search-backend - items-backend if the requested url is /search I want to direct the traffic to search-backend and if the requested url is /items I want to direct the traffic to items-backend is this a common use case for varnish or I am trying to accomplish something that should be done using something else? thanks! jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Wed Mar 30 04:52:35 2011 From: straightflush at gmail.com (AD) Date: Tue, 29 Mar 2011 22:52:35 -0400 Subject: varnish as traffic director In-Reply-To: References: Message-ID: sub vcl_recv { if (req.url ~ "^/search") { set req.backend = search-backend; } elseif (req.url ~ "^/items") { set req.backend = items-backend; } } On Tue, Mar 29, 2011 at 10:43 PM, Jonathan Leibiusky wrote: > hi! is there any way to use varnish to direct my traffic to different > backends depending on the requested url? 
> so for example I would have 2 different backends:
> - search-backend
> - items-backend
>
> if the requested url is /search I want to direct the traffic to
> search-backend
> and if the requested url is /items I want to direct the traffic to
> items-backend
>
> is this a common use case for varnish or I am trying to accomplish
> something that should be done using something else?
>
> thanks!
>
> jonathan
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ionathan at gmail.com  Wed Mar 30 05:01:13 2011
From: ionathan at gmail.com (Jonathan Leibiusky)
Date: Wed, 30 Mar 2011 00:01:13 -0300
Subject: varnish as traffic director
In-Reply-To: 
References: 
Message-ID: 

Thanks!
If I have 100 of different rules, I would have a very big if block, right?
Is this a common use case for varnish?

On Tue, Mar 29, 2011 at 11:52 PM, AD wrote:

> sub vcl_recv {
>
>   if (req.url ~ "^/search") {
>     set req.backend = search-backend;
>   }
>   elseif (req.url ~ "^/items") {
>     set req.backend = items-backend;
>   }
>
> }
>
> On Tue, Mar 29, 2011 at 10:43 PM, Jonathan Leibiusky wrote:
>
>> hi! is there any way to use varnish to direct my traffic to different
>> backends depending on the requested url?
>> so for example I would have 2 different backends:
>> - search-backend
>> - items-backend
>>
>> if the requested url is /search I want to direct the traffic to
>> search-backend
>> and if the requested url is /items I want to direct the traffic to
>> items-backend
>>
>> is this a common use case for varnish or I am trying to accomplish
>> something that should be done using something else?
>>
>> thanks!
>>
>> jonathan
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tfheen at varnish-software.com  Wed Mar 30 08:32:12 2011
From: tfheen at varnish-software.com (Tollef Fog Heen)
Date: Wed, 30 Mar 2011 08:32:12 +0200
Subject: 2.0.5 -> 2.1.5
In-Reply-To:  (Sam Crawford's message of "Mon, 28 Mar 2011 16:06:35 +0100")
References: 
Message-ID: <87r59pt8yb.fsf@qurzaw.varnish-software.com>

]] Sam Crawford 

| The only change that may catch you out is that obj.* changed to
| beresp.* from 2.0.x to 2.1.x.

Regexes also changed from case-insensitive to case-sensitive and we
switched from POSIX regexes to PCRE, which might be important as well.

-- 
Tollef Fog Heen
Varnish Software
t: +47 21 98 92 64

From martin.boer at bizztravel.nl  Tue Mar 29 11:56:48 2011
From: martin.boer at bizztravel.nl (Martin Boer)
Date: Tue, 29 Mar 2011 11:56:48 +0200
Subject: Warming the cache from an existing squid proxy instance
In-Reply-To: 
References: 
Message-ID: <4D91ACE0.7020008@bizztravel.nl>

Hi Jonathan,

What you could do is something like this:

backend squid_1 { ... }
backend backend_1 { ... }

director prefer_squid random {
  .retries = 1;
  { .backend = squid_1; .weight = 250; }
  { .backend = backend_1; .weight = 1; }
}

This will make sure varnish will retrieve data from the squids mostly
and gives you the chance to do the migration in your own time.
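With those weights roughly 250 of every 251 cache misses go to squid, so
the distant origins only see a trickle while the Varnish cache warms up.
For completeness, a sketch of the parts left out above -- the addresses
and ports are hypothetical, and with a forward-proxy squid you would still
need the absolute-URL rewrite suggested earlier in this thread:

backend squid_1 {
  .host = "192.0.2.10";   # hypothetical address of the old squid
  .port = "3128";
}

backend backend_1 {
  .host = "192.0.2.20";   # hypothetical address of the real origin
  .port = "80";
}

sub vcl_recv {
  set req.backend = prefer_squid;   # hand requests to the director above
}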
Regards, Martin On 03/25/2011 11:55 AM, Jonathan Matthews wrote: > On 21 March 2011 15:08, Jonathan Matthews wrote: >> Hi all - >> >> I've got some long-running squid instances, mainly used for caching >> medium-sized binaries, which I'd like to replace with some varnish >> instances. The binaries are quite heavy to regenerate on the distant >> origin servers and there's a large number of them. Hence, I'd like to >> use the squid cache as a target to warm a (new, nearby) varnish >> instance instead of just pointing the varnish instance at the remote >> origin servers. >> >> The squid instances are running in proxy mode, and require (I >> *believe*) an HTTP CONNECT. I've looked around for people trying the >> same thing, but haven't come across any success stories. I'm >> perfectly prepared to be told that I simply have to reconfigure the >> squid instances in mixed proxy/origin-server mode, and that there's no >> way around it, but I thought I'd ask the list for guidance first ... >> >> Any thoughts? > Anyone? All opinions welcome ... :-) > From ronny.ostman at apberget.se Wed Mar 30 08:35:10 2011 From: ronny.ostman at apberget.se (Ronny =?UTF-8?B?w5ZzdG1hbg==?=) Date: Wed, 30 Mar 2011 08:35:10 +0200 Subject: Using varnish to cache remote content Message-ID: Hello! This might be a stupid question since I've searched alot and haven't really found the answer.. Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great for all content that my backend provides. The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since the request may not go through varnish.. i guess? To further illustrate my question, here's an example of how it might look: GET mydomain.com - Domain: mydomain.com GET main.css - Domain: mydomain.com GET hello.jpg - Domain: static.mydomain.com GET anypicture.png - Domain: flickr.com GET foo.js - Domain: foo.com In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a way to have varnish do this "dynamically". The requests doesnt seem to go through my varnish according to varnishlog and this makes it hard to set backend depending on host. Am I trying to accomplish something impossible here? Thanks! Regards, Ronny -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Wed Mar 30 08:52:03 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 29 Mar 2011 23:52:03 -0700 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: Yes, what you describe is impossible. Since all of these requests are handled on the client/browser side, you can't effect them. The only way would be to either A) configure the user to proxy through your varnish for specific domains (ugly), or B) filter the user's DNS and replace flickr.com etc with the IP of your varnish cache (even uglier). Neither is possible with general internet traffic. Not a Varnish thing, but in theory you could modify your backends to rewrite external URLs that they emit as http://your_varnish.com/flickr.com/real_file(instead of http://flickr.com/real_file) and then have Varnish perform cache magick on that rewritten URL. But start talking SSL and it all goes sideways. And this assumes you wanted only to proxy external URLs that your site is emitting. 
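On the Varnish side, that rewritten-URL idea could look roughly like the
sketch below. This is only an illustration: it assumes a backend named
flickr has been declared to point at flickr.com, and that the path prefix
is exactly the origin hostname.

sub vcl_recv {
  if (req.url ~ "^/flickr\.com/") {
    set req.backend = flickr;
    set req.http.Host = "flickr.com";
    set req.url = regsub(req.url, "^/flickr\.com", "");
  }
}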
If there's a glimmer of possibility, it's a really ugly glimmer. ;-) -- kb On Tue, Mar 29, 2011 at 23:35, Ronny ?stman wrote: > Hello! > > This might be a stupid question since I've searched alot and haven't really > found the answer.. > > Anyway, I have a varnish set up caching requests to my backend the way I > want it to and it works great > for all content that my backend provides. > > The problem I am having is caching content from remote sources.. I'm not > sure if this is really possible since > the request may not go through varnish.. i guess? > > To further illustrate my question, here's an example of how it might look: > > GET mydomain.com - Domain: mydomain.com > GET main.css - Domain: mydomain.com > GET hello.jpg - Domain: static.mydomain.com > GET anypicture.png - Domain: flickr.com > GET foo.js - Domain: foo.com > > In this example, is it possible to have my varnish cache those "remote" > requests as well? I can set up backends > for those remote domains and force varnish to use them instead of my own > backend but I can't seem to find a > way to have varnish do this "dynamically". The requests doesnt seem to go > through my varnish according to > varnishlog and this makes it hard to set backend depending on host. > > Am I trying to accomplish something impossible here? > > Thanks! > > Regards, > Ronny > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Wed Mar 30 08:51:43 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 30 Mar 2011 08:51:43 +0200 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: You cannot do that with varnish, or with anything else :) Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronny ?stman Sent: Wednesday, March 30, 2011 8:35 AM To: varnish-misc at varnish-cache.org Subject: Using varnish to cache remote content Hello! This might be a stupid question since I've searched alot and haven't really found the answer.. Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great for all content that my backend provides. The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since the request may not go through varnish.. i guess? To further illustrate my question, here's an example of how it might look: GET mydomain.com - Domain: mydomain.com GET main.css - Domain: mydomain.com GET hello.jpg - Domain: static.mydomain.com GET anypicture.png - Domain: flickr.com GET foo.js - Domain: foo.com In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a way to have varnish do this "dynamically". The requests doesnt seem to go through my varnish according to varnishlog and this makes it hard to set backend depending on host. Am I trying to accomplish something impossible here? Thanks! Regards, Ronny -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ronny.ostman at apberget.se Wed Mar 30 09:16:06 2011 From: ronny.ostman at apberget.se (Ronny =?UTF-8?B?w5ZzdG1hbg==?=) Date: Wed, 30 Mar 2011 09:16:06 +0200 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: Thank's for your answers! :) I figured it was more or less impossible. > >If there's a glimmer of possibility, it's a really ugly glimmer. ;-) Luckily ugly workarounds is my speciality! ;) > > But I guess a pretty resonable solution for caching all the content I want from the domains that I control on the same varnish installation is to point e.g. www.mydomain and static.mydomain to my varnish server and "route" traffic using several backends? Regards, Ronny > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 30 09:23:13 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 30 Mar 2011 09:23:13 +0200 Subject: varnish as traffic director In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 5:01 AM, Jonathan Leibiusky wrote: > Thanks! > If I have 100 of different rules, I would have a very big if block, right? > Is this a common use case for varnish? Yes. It's quite common to have a lot of logic. Don't worry about it, VCL is executed at light speed. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From geoff at uplex.de Wed Mar 30 10:06:06 2011 From: geoff at uplex.de (Geoff Simmons) Date: Wed, 30 Mar 2011 10:06:06 +0200 Subject: How to collect lines from varnishncsa only from a specific domain? In-Reply-To: References: Message-ID: <4D92E46E.8010101@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/29/11 10:09 PM, Kurt Kraut wrote: > > I'm trying to use varnishncsa -c -I to collect the output of varnishncsa > concerning a specific domain (e.g.: domain.com). I've attempted the > following commands: > > varnishncsa -c -I *domain.com* > varnishncsa -c -I /*domain.com/ > > And none of them worked. But the following command works: > > varnishncsa -c | grep domain.com The -I flag for varnishncsa and other tools does regex matching, not globbing with the '*' wildcard. But if it's enough for you to just match the string domain.com, you don't need anything else: varnishncsa -c -I domain.com Just like for grep. 
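(One small footnote, nothing specific to varnishncsa: the dot in
domain.com is a regex wildcard, so -I 'domain\.com' pins the match to the
literal name if that ever matters.)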
Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNkuRuAAoJEOUwvh9pJNURuPYQAI51SjWQXEzicJtgjpMW5R7c rk88lflgWqCyo3390qvd3eA2YAW7JIPSsOcFhLwFSb1/OxLHqmn5lIy2y/gTiJbV kd9yoVMomwWyH0vAr3F1L3iW7HwTLMoz/F9nXNBYRYhbAUaAs9ESrEIiPqD3SYP8 Z1Py1DwtEhiVfJ8X3yYEfVEef9B60Zn1Y3czrQ75m+i9mvljMWxCa2kL/IgKVTe+ MlwQA4wPni+qxTJoC5wwZNLh6FHRtl2F6OQUJrm0bBjt97tw8Ul+1DLFUjHCY6vl lPTXQFflqDwaBo4kiPHRgpKHvmFpcwNZokYeZ9bQgB8ds+fJCx4DBI/t40pUCRiB gJT5AKfFCiFyu/HdC4vGMqXrt22wn9yriHUhTI8qnbHRj939wFkoBix3XrjjVKSW 4Ma1kaET3tTJILtz4xhAVhQPOb4HEuoY5otcTrUS+Ix5aEQwsjFsEVkzS7mh8RLc OtNe8t0JEmLm7SdgFSit7RO/i0dPRyL4ih8duB1PIKeJxys8nSIQvODzban7k9Oa rrQVsplPLmHjngUBoDxNkyc1yo7s6OjsVO7seVxjvgOSoWdmgnOEG3oVvF9uZ2Jd tFPl7gfBM6eHR8owLgUscuaQGVbRr00om5Y3RSX/MGqPZwQR8Dy/X6YJI0DIAZfQ Mj2/uaZ7Uw4lHJFRxhkL =Lbj3 -----END PGP SIGNATURE----- From diego.roccia at subito.it Wed Mar 30 10:51:24 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Wed, 30 Mar 2011 10:51:24 +0200 Subject: Varnish stuck on most served content Message-ID: <4D92EF0C.40204@subito.it> Hi Guys, This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: +---------+ +---------+ +---------+ +---------+ | VARNISH | | VARNISH | | VARNISH | | VARNISH | +---------+ +---------+ +---------+ +---------+ | | | | +------------+------------+------------+ | | +------+-+ +--+-----+ | APACHE | | APACHE | +--------+ +--------+ Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. It's really weird, because css should be always cached. I'm running Centos 5.5 64bit and here's my varnish startup parameters: DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T 0.0.0.0:6082 \ -t 604800 \ -u varnish -g varnish \ -s malloc,54G \ -p thread_pool_add_delay=2 \ -p thread_pools=16 \ -p thread_pool_min=50 \ -p thread_pool_max=4000 \ -p listen_depth=4096 \ -p lru_interval=600 \ -hclassic,500009 \ -p log_hashstring=off \ -p shm_workspace=16384 \ -p ping_interval=2 \ -p default_grace=3600 \ -p pipe_timeout=10 \ -p sess_timeout=6 \ -p send_timeout=10" In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. Hope someone can help me Thanks in advance Diego -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: stats.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: default.vcl URL: From traian.bratucu at eea.europa.eu Wed Mar 30 10:59:08 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 30 Mar 2011 10:59:08 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D92EF0C.40204@subito.it> References: <4D92EF0C.40204@subito.it> Message-ID: Not sure what you mean by "freeze", but what you need to do is debug the request with "varnishlog". You need to see what exactly happens when the GET request is received by varnish and whether it is served from cache or varnish tries to fetch from the backends. Try " varnishlog -o | grep -A 50 'your.css' " (or something like that) on one of the varnish servers. Traian -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Diego Roccia Sent: Wednesday, March 30, 2011 10:51 AM To: varnish-misc at varnish-cache.org Subject: Varnish stuck on most served content Hi Guys, This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: +---------+ +---------+ +---------+ +---------+ | VARNISH | | VARNISH | | VARNISH | | VARNISH | +---------+ +---------+ +---------+ +---------+ | | | | +------------+------------+------------+ | | +------+-+ +--+-----+ | APACHE | | APACHE | +--------+ +--------+ Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. It's really weird, because css should be always cached. I'm running Centos 5.5 64bit and here's my varnish startup parameters: DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T 0.0.0.0:6082 \ -t 604800 \ -u varnish -g varnish \ -s malloc,54G \ -p thread_pool_add_delay=2 \ -p thread_pools=16 \ -p thread_pool_min=50 \ -p thread_pool_max=4000 \ -p listen_depth=4096 \ -p lru_interval=600 \ -hclassic,500009 \ -p log_hashstring=off \ -p shm_workspace=16384 \ -p ping_interval=2 \ -p default_grace=3600 \ -p pipe_timeout=10 \ -p sess_timeout=6 \ -p send_timeout=10" In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. Hope someone can help me Thanks in advance Diego From diego.roccia at subito.it Wed Mar 30 11:10:35 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Wed, 30 Mar 2011 11:10:35 +0200 Subject: Varnish stuck on most served content In-Reply-To: References: <4D92EF0C.40204@subito.it> Message-ID: <4D92F38B.6090900@subito.it> Hi Traian, Thanks for your interest. The problem is that it's a random issue. I noticed it as I'm using some commercial tools (keynote and gomez) to monitor website performances and I notice some out of average point in the scatter time graph. Experiencing it locally is really hard. 
On 03/30/2011 10:59 AM, Traian Bratucu wrote: > Not sure what you mean by "freeze", but what you need to do is debug the request with "varnishlog". > You need to see what exactly happens when the GET request is received by varnish and whether it is served from cache or varnish tries to fetch from the backends. > > Try " varnishlog -o | grep -A 50 'your.css' " (or something like that) on one of the varnish servers. > > Traian > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Diego Roccia > Sent: Wednesday, March 30, 2011 10:51 AM > To: varnish-misc at varnish-cache.org > Subject: Varnish stuck on most served content > > Hi Guys, > This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: > > +---------+ +---------+ +---------+ +---------+ > | VARNISH | | VARNISH | | VARNISH | | VARNISH | > +---------+ +---------+ +---------+ +---------+ > | | | | > +------------+------------+------------+ > | | > +------+-+ +--+-----+ > | APACHE | | APACHE | > +--------+ +--------+ > > Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. > > It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. > > It's really weird, because css should be always cached. > > I'm running Centos 5.5 64bit and here's my varnish startup parameters: > > DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ > -f ${VARNISH_VCL_CONF} \ > -T 0.0.0.0:6082 \ > -t 604800 \ > -u varnish -g varnish \ > -s malloc,54G \ > -p thread_pool_add_delay=2 \ > -p thread_pools=16 \ > -p thread_pool_min=50 \ > -p thread_pool_max=4000 \ > -p listen_depth=4096 \ > -p lru_interval=600 \ > -hclassic,500009 \ > -p log_hashstring=off \ > -p shm_workspace=16384 \ > -p ping_interval=2 \ > -p default_grace=3600 \ > -p pipe_timeout=10 \ > -p sess_timeout=6 \ > -p send_timeout=10" > > In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? > > In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. > > Hope someone can help me > Thanks in advance > Diego > > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From pom at dmsp.de Wed Mar 30 11:59:03 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Wed, 30 Mar 2011 11:59:03 +0200 Subject: How to collect lines from varnishncsa only from a specific domain? In-Reply-To: <4D92E46E.8010101@uplex.de> References: <4D92E46E.8010101@uplex.de> Message-ID: <4D92FEE7.2030305@dmsp.de> Am 30.03.2011 10:06, schrieb Geoff Simmons: > On 03/29/11 10:09 PM, Kurt Kraut wrote: >> I'm trying to use varnishncsa -c -I to collect the output of varnishncsa >> concerning a specific domain (e.g.: domain.com). 
I've attempted the >> following commands: >> >> varnishncsa -c -I *domain.com* >> varnishncsa -c -I /*domain.com/ >> >> And none of them worked. But the following command works: >> >> varnishncsa -c | grep domain.com > The -I flag for varnishncsa and other tools does regex matching, not > globbing with the '*' wildcard. > > But if it's enough for you to just match the string domain.com, you > don't need anything else: > > varnishncsa -c -I domain.com > > Just like for grep. > > > Best, > Geoff I have to support Kurt ;-) Have the same problem (still moved it to the stack of 'unsolved stuff' so far...) varnishncsa -I is either not or at least working strange... Example: aurora ~ # varnishncsa -c XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:16 +0200] "GET http://www.annuna.net/ HTTP/1.0" 200 791 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:17 +0200] "GET http://www.annuna.net/StyleSheet.css HTTP/1.0" 200 2876 "http://www.annuna.net/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:17 +0200] "GET http://www.annuna.net/img/Logo_Annuna_850x680_weiss.jpg HTTP/1.0" 200 31986 "http://www.annuna.net/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" aurora ~ # varnishncsa -c -I annuna or aurora ~ # varnishncsa -c -I "^.*annuna.*$" From my understanding it should at least match any line containing the character string "annuna"? But it doesn't... Am I doint it wrong? ^^ Wondering... Stefan -- Dipl.-Inform. Stefan Pommerening Informatik-B?ro: IT-Dienste & Projekte, Consulting & Coaching http://www.dmsp.de From stewsnooze at gmail.com Wed Mar 30 08:40:49 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Wed, 30 Mar 2011 07:40:49 +0100 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: <191AFFC3-B260-40FB-9AD7-CABBBF9F4E8B@gmail.com> Hi, You can only cache items where the DNS record for those sites points at the server/infrastructure where you are running Varnish. You could do something crazy like have flickr.mydomain.com referenced in your HTML pages which is configured in Varnish to use flickr.com as a backend. Personally I think this is a bit strange but it is possible. You need to think about why you are caching external stuff in Varnish and whether you are allowed to? Stew On 30 Mar 2011, at 07:35, Ronny ?stman wrote: > Hello! > > This might be a stupid question since I've searched alot and haven't really found the answer.. > > Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great > for all content that my backend provides. > > The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since > the request may not go through varnish.. i guess? > > To further illustrate my question, here's an example of how it might look: > > GET mydomain.com - Domain: mydomain.com > GET main.css - Domain: mydomain.com > GET hello.jpg - Domain: static.mydomain.com > GET anypicture.png - Domain: flickr.com > GET foo.js - Domain: foo.com > > In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends > for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a > way to have varnish do this "dynamically". 
The requests doesnt seem to go through my varnish according to > varnishlog and this makes it hard to set backend depending on host. > > Am I trying to accomplish something impossible here? > > Thanks! > > Regards, > Ronny > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From lampe at hauke-lampe.de Wed Mar 30 18:27:00 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Wed, 30 Mar 2011 18:27:00 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D92EF0C.40204@subito.it> References: <4D92EF0C.40204@subito.it> Message-ID: <4D9359D4.6080102@hauke-lampe.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 30.03.2011 10:51, Diego Roccia wrote: > Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, > running varnish 2.1.6 (Updated from 2.0.5 1 month ago). varnish 2.1.6 hasn't been released, yet, AFAIK. > It's happening often that they freeze serving contents. and I noticed a > common pattern: the content that get stuck is always one of the most > served, like a css or js file, or some component of the page layout, Do you run 2.1.4 or 2.1.5? Is the "freeze" a constant timeout, i.e. does it eventually deliver the content after the same period of time? There was a bug in 2.1.4 that could lead to the symptoms you describe. If the client sent an If-Modified-Since: header and the backend returned a 304 response, varnish would wait on the backend connection until "first_byte_timeout" elapsed. In that case, the following VCL code helps: sub vcl_pass { unset bereq.http.if-modified-since; unset bereq.http.if-none-match; } http://cfg.openchaos.org/varnish/vcl/common/bug_workaround_2.1.4_304.vcl See also this thread: http://www.gossamer-threads.com/lists/varnish/misc/17155#17155 Hauke. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iEYEARECAAYFAk2TWc8ACgkQKIgAG9lfHFOU1wCgkr0TwZZoJQz7CQ5vdCgryENP 4HIAn0W0qG2K63vnkHDNA1ZMRGElIE30 =BTfX -----END PGP SIGNATURE----- From diego.roccia at subito.it Thu Mar 31 10:33:02 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Thu, 31 Mar 2011 10:33:02 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D9359D4.6080102@hauke-lampe.de> References: <4D92EF0C.40204@subito.it> <4D9359D4.6080102@hauke-lampe.de> Message-ID: <4D943C3E.5070903@subito.it> On 03/30/2011 06:27 PM, Hauke Lampe wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > On 30.03.2011 10:51, Diego Roccia wrote: > >> Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, >> running varnish 2.1.6 (Updated from 2.0.5 1 month ago). > > varnish 2.1.6 hasn't been released, yet, AFAIK. Sorry, I meant varnish-2.1.5 (SVN 0843d7a), the version from official rpm repository > >> It's happening often that they freeze serving contents. and I noticed a >> common pattern: the content that get stuck is always one of the most >> served, like a css or js file, or some component of the page layout, > > Do you run 2.1.4 or 2.1.5? Is the "freeze" a constant timeout, i.e. does > it eventually deliver the content after the same period of time? Doesn't seems to be a constant time. the same varnish provides tens of elements per page, and sometimes it gets stuck on one of them. It's always a css or js. There are no rules in the vcl specific to these kind of files. 
so the only common pattern I see is that they're the files it has to serve most times. > There was a bug in 2.1.4 that could lead to the symptoms you describe. > If the client sent an If-Modified-Since: header and the backend returned > a 304 response, varnish would wait on the backend connection until > "first_byte_timeout" elapsed. > I don't think it's receiving the If-Modified-Since , as we're talking about website monitoring tools, and they are configured to start cache cleared every time. > In that case, the following VCL code helps: > > sub vcl_pass { > unset bereq.http.if-modified-since; > unset bereq.http.if-none-match; > } > http://cfg.openchaos.org/varnish/vcl/common/bug_workaround_2.1.4_304.vcl > > See also this thread: > http://www.gossamer-threads.com/lists/varnish/misc/17155#17155 > > > > Hauke. > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > > iEYEARECAAYFAk2TWc8ACgkQKIgAG9lfHFOU1wCgkr0TwZZoJQz7CQ5vdCgryENP > 4HIAn0W0qG2K63vnkHDNA1ZMRGElIE30 > =BTfX > -----END PGP SIGNATURE----- From mhettwer at team.mobile.de Thu Mar 31 11:15:06 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 10:15:06 +0100 Subject: Varnish stuck on most served content In-Reply-To: <4D92F38B.6090900@subito.it> Message-ID: Hi Diego, Please try to avoid top posting. On 30.03.11 11:10, "Diego Roccia" wrote: >Hi Traian, > Thanks for your interest. The problem is that it's a random issue. I >noticed it as I'm using some commercial tools (keynote and gomez) to >monitor website performances and I notice some out of average point in >the scatter time graph. Experiencing it locally is really hard. We are using gomez to let them monitor some of our important pages from remote. What we usually do if we see spikes is, to dig in and find out were the time is spent. In your example, if it's gomez, click in and check. Is it first byte time? DNS time? Content delivery time? With regards to how to debug that. I second the question to the list. My usual procedure in a setup of Apache-->Tomcat-->SomeBackends, I'll go and dig into the access logs of all components and try to figure out who is spending the time to deliver. However, with varnish in front of apaches, I usually don't have a logfile which tells me "varnish thinks it spend xx ms to deliver this request". I know that theres varnishncsa, but I dunno whether it logs away the processing time of a request (%D in Apache LogFormat IIRC). You might try and really use varnishlog to log away requests to js and css files. However, you might grow some really huge files there... Hard to parse them ;) >> >> I'm running Centos 5.5 64bit and here's my varnish startup parameters: >> >> DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >> -f ${VARNISH_VCL_CONF} \ >> -T 0.0.0.0:6082 \ >> -t 604800 \ >> -u varnish -g varnish \ >> -s malloc,54G \ >> -p thread_pool_add_delay=2 \ >> -p thread_pools=16 \ >> -p thread_pool_min=50 \ >> -p thread_pool_max=4000 \ >> -p listen_depth=4096 \ >> -p lru_interval=600 \ >> -hclassic,500009 \ >> -p log_hashstring=off \ >> -p shm_workspace=16384 \ >> -p ping_interval=2 \ >> -p default_grace=3600 \ >> -p pipe_timeout=10 \ >> -p sess_timeout=6 \ >> -p send_timeout=10" Hu. What are all those "-p" parameters? Looks like some heavy tweaking to me. Perhaps some varnish gurus might shime in, but to me tuning like that sounds like trouble. Unless you really know what you did there. I wouldn't (not without the documentation at hands). 
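On the question above of whether the processing time ends up in a log anywhere: the shared memory log does carry per-request timing in the ReqEnd records, so one low-tech option (a sketch, not something from this thread) is to watch just that tag and correlate the XIDs with the URLs logged elsewhere:

    # only the end-of-request records, which carry the request's
    # timestamps and processing/delivery times
    varnishlog -i ReqEnd

The exact ReqEnd field layout differs between releases, so the varnishlog(1) man page of the installed version is the authority on which column is which.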
Cheers, Marian From geoff at uplex.de Thu Mar 31 11:40:24 2011 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 31 Mar 2011 11:40:24 +0200 Subject: Varnish stuck on most served content In-Reply-To: References: Message-ID: <4D944C08.7040804@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/31/11 11:15 AM, Hettwer, Marian wrote: >>> >>> I'm running Centos 5.5 64bit and here's my varnish startup parameters: >>> >>> DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >>> -f ${VARNISH_VCL_CONF} \ >>> -T 0.0.0.0:6082 \ >>> -t 604800 \ >>> -u varnish -g varnish \ >>> -s malloc,54G \ >>> -p thread_pool_add_delay=2 \ >>> -p thread_pools=16 \ >>> -p thread_pool_min=50 \ >>> -p thread_pool_max=4000 \ >>> -p listen_depth=4096 \ >>> -p lru_interval=600 \ >>> -hclassic,500009 \ >>> -p log_hashstring=off \ >>> -p shm_workspace=16384 \ >>> -p ping_interval=2 \ >>> -p default_grace=3600 \ >>> -p pipe_timeout=10 \ >>> -p sess_timeout=6 \ >>> -p send_timeout=10" > > Hu. What are all those "-p" parameters? Looks like some heavy tweaking to > me. > Perhaps some varnish gurus might shime in, but to me tuning like that > sounds like trouble. > Unless you really know what you did there. > > I wouldn't (not without the documentation at hands). Um. Many of those -p's are roughly in the ranges recommended on the Wiki performance page, and on Kristian Lyngstol's blog. http://www.varnish-cache.org/trac/wiki/Performance http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/ Perhaps one of the settings is causing a problem, but it isn't wrong to be doing it all -- and it's quite necessary on a high-traffic site. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNlEwIAAoJEOUwvh9pJNUR0tYP/2M9LpET5mj3OQiMu2Bym1JD iTn2eckasyQRwPzXvDhCZNFRHJDV8aO2wUZ3XqMFsty05FKgPUhoLgJZ9wAoaBXZ oVVr34G4b33SFlVAxfvskHrEp83F0cY5Gb6W/JP2Oj/SzpHEM3elT+8tTFXjgngB F463EiGcikdSdQ5PaMGfTva9JZP6QI+K0IYW4walCPSsz829yQ6I6e5eIDCECiFq BhJMcXdvATWOHg5LfcRUOlcQJJFPl0mzT/Y2zq/hgdImjZ5NLU87xhjFD8twKOVZ Rju8u2Cz6Pl9HHNyVTV5W2fNmIE3J1o972JseHz4wFNoEJQtzTtyEGADE2u2bXH9 Blbor4J1bmERUSyFvH9Brhe1+4Rs5IOtGFCGrEzpxtY+QiqCIkdxmCCl5/EhQlRl eJZMkN3eaXvGgrHHASxM7e2UoIFm0XrBJW5N01Bu6dA/EH6jLowwEmU6OeLkKUSF DLIgAeKt1ECrVU23b9zFfiZSQwMTKB7KumrJoeDrUtSuWVIWdz83thaD0MI6ucxD 62CIPkR7W5zDxSDQ0A/AnXrkZpe8sLP9Z/DgcHA8rSX39zqxJae44OnU56fU07zc 440P+GeT6j5MoKAa1gCxSDAVr7MnDB3B82Y8fZaUFWB1rT1tI/B7VB5dhVwFoEi2 ucD3QwucEs7bpLrKyiwo =3vMe -----END PGP SIGNATURE----- From dan at retrobadger.net Thu Mar 31 13:39:51 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 12:39:51 +0100 Subject: Learning C and VRT_SetHdr() Message-ID: <4D946807.8010900@retrobadger.net> I would like to do some more advanced functionality within our VCL file, and to do so need to use the inline-c capabilities of varnish. So, to start off with, I thought I would get my VCL file to set the headers, so I can test variables, and be sure it is working. But am getting a WSOD when I impliment my seemingly simple code. 
So my main questions are: * Are there any good docs on the VRT variables * Are there examples/tutorials on using C within the VCL (for beginners to the subject) * Is there something obvious I have overlooked when writing my code The snipper from my current code is: /sub detectmobile {/ / C{/ / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", vrt_magic_string_end);/ / }C/ /}/ /sub vcl_miss {/ / call detectmobile;/ / return(fetch);/ /}/ /sub vcl_pipe {/ / call detectmobile;/ / return(pipe);/ /}/ /sub vcl_pass {/ / call detectmobile;/ / return(pass);/ /}/ Thanks for your advice, Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Thu Mar 31 13:58:26 2011 From: straightflush at gmail.com (AD) Date: Thu, 31 Mar 2011 07:58:26 -0400 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D946807.8010900@retrobadger.net> References: <4D946807.8010900@retrobadger.net> Message-ID: use -C to display the default VCL , or just put in a command you want to do in C inside the vcl and then see how it looks when running -C -f against the config. On Thu, Mar 31, 2011 at 7:39 AM, Dan wrote: > I would like to do some more advanced functionality within our VCL file, > and to do so need to use the inline-c capabilities of varnish. So, to start > off with, I thought I would get my VCL file to set the headers, so I can > test variables, and be sure it is working. But am getting a WSOD when I > impliment my seemingly simple code. > > > So my main questions are: > * Are there any good docs on the VRT variables > * Are there examples/tutorials on using C within the VCL (for beginners to > the subject) > * Is there something obvious I have overlooked when writing my code > > > The snipper from my current code is: > *sub detectmobile {* > * C{* > * VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end);* > * }C* > *}* > *sub vcl_miss {* > * call detectmobile;* > * return(fetch);* > *}* > *sub vcl_pipe {* > * call detectmobile;* > * return(pipe);* > *}* > *sub vcl_pass {* > * call detectmobile;* > * return(pass);* > *}* > > > Thanks for your advice, > Dan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cosimo at streppone.it Thu Mar 31 13:58:54 2011 From: cosimo at streppone.it (Cosimo Streppone) Date: Thu, 31 Mar 2011 22:58:54 +1100 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D946807.8010900@retrobadger.net> References: <4D946807.8010900@retrobadger.net> Message-ID: On Thu, 31 Mar 2011 22:39:51 +1100, Dan wrote: > The snipper from my current code is: Is it correct to do this in all vcl_miss, pipe and pass? What about vcl_hit then? I would have expected this to happen in vcl_deliver() or vcl_fetch() if you want your backends to see the header you're setting. Anyway... > /sub detectmobile {/ > / C{/ > / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end);/ > / }C/ I believe you have a problem in the "\020" bit. That is octal notation. 
"X-Whatever:" is 11 bytes, including the ':', so you need to write: "\013X-Whatever:" Have fun, -- Cosimo From ttischler at homeaway.com Wed Mar 30 20:46:43 2011 From: ttischler at homeaway.com (Tim Tischler) Date: Wed, 30 Mar 2011 13:46:43 -0500 Subject: varnish as traffic director In-Reply-To: Message-ID: We first started using varnish for caching during a superbowl advertisement, and then when we no longer needed the caching, we keep using it as our load balancer. We're now using it as a A/B testing system between static builds with a number of different rules. We've written a ruby DSL to generate the common rules and inject the the GUID hashes that uniquely identify the A vs. the B builds. We are also routing path prefixes to various additional applications. I've been extremely happy with the speed, the stability, and the flexibility of varnish as a load balancer/content switch, even without caching. -T On 3/30/11 2:23 AM, "Per Buer" wrote: > On Wed, Mar 30, 2011 at 5:01 AM, Jonathan Leibiusky > wrote: >> Thanks! >> If I have 100 of different rules, I would have a very big if block, right? >> Is this a common use case for varnish? > > Yes. It's quite common to have a lot of logic. Don't worry about it, > VCL is executed at light speed. From mhettwer at team.mobile.de Thu Mar 31 14:48:42 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 13:48:42 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: Hej there, On 30.03.11 04:52, "AD" wrote: >sub vcl_recv { > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > elseif (req.url ~ "^/items") { > set req.backend = items-backend; > } > >} By the way, would it also be okay to write it like that? sub vcl_recv { set req.backend = catchall-backend; if (req.url ~ "^/search") { set req.backend = search-backend; } if (req.url ~ "^/items") { set req.backend = items-backend; } } Obviously with the addition of the catchall-backend. Cheers, Marian From rtshilston at gmail.com Thu Mar 31 14:53:49 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Thu, 31 Mar 2011 13:53:49 +0100 Subject: varnish as traffic director In-Reply-To: References: Message-ID: > > On 30.03.11 04:52, "AD" wrote: > >> sub vcl_recv { >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> elseif (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >> } > > By the way, would it also be okay to write it like that? > > sub vcl_recv { > > set req.backend = catchall-backend; > > > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > if (req.url ~ "^/items") { > set req.backend = items-backend; > } > > } Logically it's ok. But, it's probably slightly better in terms of efficiency to use an elseif pattern. This is you'll do the first pattern match (/search) on every request, and then you'll also do the pattern match for items, even if you'd already matched to /search. Two pattern matches rather than one is undesirable, and even more so if you ended up having lots and lots of matches to do. Rob From ionathan at gmail.com Thu Mar 31 14:59:33 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Thu, 31 Mar 2011 09:59:33 -0300 Subject: varnish as traffic director In-Reply-To: References: Message-ID: Thanks! It is great to know about real life implementations. Do you have any good way to test rules in your dev env? Is there any benchmark on varnish vs. nginx in regard of load balancing? 
On 3/31/11, Hettwer, Marian wrote: > Hej there, > > > > > On 30.03.11 04:52, "AD" wrote: > >>sub vcl_recv { >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> elseif (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >>} > > By the way, would it also be okay to write it like that? > > sub vcl_recv { > > set req.backend = catchall-backend; > > > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > if (req.url ~ "^/items") { > set req.backend = items-backend; > } > > } > > > Obviously with the addition of the catchall-backend. > > Cheers, > Marian > > From mhettwer at team.mobile.de Thu Mar 31 15:06:09 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 14:06:09 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: On 31.03.11 14:53, "Robert Shilston" wrote: >> >> On 30.03.11 04:52, "AD" wrote: >> >>> sub vcl_recv { >>> if (req.url ~ "^/search") { >>> set req.backend = search-backend; >>> } >>> elseif (req.url ~ "^/items") { >>> set req.backend = items-backend; >>> } >>> >>> } >> >> By the way, would it also be okay to write it like that? >> >> sub vcl_recv { >> >> set req.backend = catchall-backend; >> >> >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> if (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >> } > > >Logically it's ok. But, it's probably slightly better in terms of >efficiency to use an elseif pattern. This is you'll do the first pattern >match (/search) on every request, and then you'll also do the pattern >match for items, even if you'd already matched to /search. Two pattern >matches rather than one is undesirable, and even more so if you ended up >having lots and lots of matches to do. Understood! Thanks for your explanation :-) Cheers, Marian From mhettwer at team.mobile.de Thu Mar 31 15:09:22 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 14:09:22 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: On 31.03.11 14:59, "Jonathan Leibiusky" wrote: >Thanks! It is great to know about real life implementations. >Do you have any good way to test rules in your dev env? >Is there any benchmark on varnish vs. nginx in regard of load balancing? Some Real-Life numbers. We have 4 varnishes deployed in front of a big german classifieds site. Each varnish is doing 1200 requests/second and according to munin, each machine is nearly idle. (cpu load at 4% out of 800%). Hardware is HP blades with 8 cores and 8 gig ram. I wouldn't bother to try out nginx. If nginx is in the same league like varnish, I probably couldn't spot a difference anyway ;-) Besides, I'm really happy with varnish. Sorry, no real-life infos about nginx here... Regards, Marian From dan at retrobadger.net Thu Mar 31 15:25:13 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 14:25:13 +0100 Subject: Learning C and VRT_SetHdr() In-Reply-To: References: <4D946807.8010900@retrobadger.net> Message-ID: <4D9480B9.4060101@retrobadger.net> On 31/03/11 12:58, Cosimo Streppone wrote: > On Thu, 31 Mar 2011 22:39:51 +1100, Dan wrote: > >> The snipper from my current code is: > > Is it correct to do this in all > vcl_miss, pipe and pass? > What about vcl_hit then? > > I would have expected this to happen in vcl_deliver() > or vcl_fetch() if you want your backends to see > the header you're setting. > > Anyway... 
> >> /sub detectmobile {/ >> / C{/ >> / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", >> vrt_magic_string_end);/ >> / }C/ > > I believe you have a problem in the "\020" bit. > That is octal notation. > > "X-Whatever:" is 11 bytes, including the ':', > so you need to write: > > "\013X-Whatever:" > > Have fun, > Sadly no luck with that, I have ammended my code as recommended. Varnish is still able to restart without errors, but WSOD on page load. My custom function is now something: sub detectmobile { C{ VRT_SetHdr(sp, HDR_BEREQ, "\013X-Varnish-TeraWurfl:", "no1", vrt_magic_string_end); }C } And the only occurance of 'call detectmobile;' is in: sub vcl_deliver {} Are there any libraries required for the VRT scripts to work? Do I need to alter the /etc/varnish/default file for C to work in varnish? From dan at retrobadger.net Thu Mar 31 15:28:21 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 14:28:21 +0100 Subject: Learning C and VRT_SetHdr() In-Reply-To: References: <4D946807.8010900@retrobadger.net> Message-ID: <4D948175.3000503@retrobadger.net> On 31/03/11 12:58, AD wrote: > use -C to display the default VCL , or just put in a command you want > to do in C inside the vcl and then see how it looks when running -C -f > against the config. > > > On Thu, Mar 31, 2011 at 7:39 AM, Dan > wrote: > > I would like to do some more advanced functionality within our VCL > file, and to do so need to use the inline-c capabilities of > varnish. So, to start off with, I thought I would get my VCL file > to set the headers, so I can test variables, and be sure it is > working. But am getting a WSOD when I impliment my seemingly > simple code. > > > So my main questions are: > * Are there any good docs on the VRT variables > * Are there examples/tutorials on using C within the VCL (for > beginners to the subject) > * Is there something obvious I have overlooked when writing my code > > > The snipper from my current code is: > /sub detectmobile {/ > / C{/ > / VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end);/ > / }C/ > /}/ > /sub vcl_miss {/ > / call detectmobile;/ > / return(fetch);/ > /}/ > /sub vcl_pipe {/ > / call detectmobile;/ > / return(pipe);/ > /}/ > /sub vcl_pass {/ > / call detectmobile;/ > / return(pass);/ > /}/ > > > Thanks for your advice, > Dan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > Sorry, I am confused, where would I put -C, in my /etc/default/varnish file? Is this required to use inline-c within my vcl file? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Thu Mar 31 20:09:24 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Thu, 31 Mar 2011 11:09:24 -0700 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D9480B9.4060101@retrobadger.net> References: <4D946807.8010900@retrobadger.net> <4D9480B9.4060101@retrobadger.net> Message-ID: On Thu, Mar 31, 2011 at 06:25, Dan wrote: > Sadly no luck with that, I have ammended my code as recommended. Varnish > is still able to restart without errors, but WSOD on page load. My custom > function is now something: > The length of your header is 20 characters including the colon. 013 is the length (in octal) of the X-Whatever: example provided to explain this to you, it is not octal for 20. Replace 013 with 024 to avoid segfaults. 
There are docs covering this on the website, BTW. What on earth is a WSOD? -- kb > > sub detectmobile { > C{ > VRT_SetHdr(sp, HDR_BEREQ, "\013X-Varnish-TeraWurfl:", "no1", > vrt_magic_string_end); > }C > } > > And the only occurance of 'call detectmobile;' is in: > sub vcl_deliver {} > > Are there any libraries required for the VRT scripts to work? > > Do I need to alter the /etc/varnish/default file for C to work in varnish? > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thebog at gmail.com Thu Mar 31 21:46:33 2011 From: thebog at gmail.com (thebog) Date: Thu, 31 Mar 2011 21:46:33 +0200 Subject: Learning C and VRT_SetHdr() In-Reply-To: <4D948175.3000503@retrobadger.net> References: <4D946807.8010900@retrobadger.net> <4D948175.3000503@retrobadger.net> Message-ID: I think he meant -C in the commandline. Not inside the file. YS Anders Berg On Thu, Mar 31, 2011 at 3:28 PM, Dan wrote: > On 31/03/11 12:58, AD wrote: > > use -C to display the default VCL , or just put in a command you want to do > in C inside the vcl and then see how it looks when running -C -f against the > config. > > On Thu, Mar 31, 2011 at 7:39 AM, Dan wrote: >> >> I would like to do some more advanced functionality within our VCL file, >> and to do so need to use the inline-c capabilities of varnish.? So, to start >> off with, I thought I would get my VCL file to set the headers, so I can >> test variables, and be sure it is working.? But am getting a WSOD when I >> impliment my seemingly simple code. >> >> >> So my main questions are: >> * Are there any good docs on the VRT variables >> * Are there examples/tutorials on using C within the VCL (for beginners to >> the subject) >> * Is there something obvious I have overlooked when writing my code >> >> >> The snipper from my current code is: >> sub detectmobile { >> ? C{ >> ??? VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", >> vrt_magic_string_end); >> ? }C >> } >> sub vcl_miss { >> ? call detectmobile; >> ? return(fetch); >> } >> sub vcl_pipe { >> ? call detectmobile; >> ? return(pipe); >> } >> sub vcl_pass { >> ? call detectmobile; >> ? return(pass); >> } >> >> >> Thanks for your advice, >> Dan >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > Sorry, I am confused, where would I put -C, in my /etc/default/varnish > file?? Is this required to use inline-c within my vcl file? > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From gojomo at archive.org Tue Mar 1 01:09:04 2011 From: gojomo at archive.org (Gordon Mohr) Date: Mon, 28 Feb 2011 16:09:04 -0800 Subject: Practical VCL limits; giant URL->backend map Message-ID: <4D6C3920.6030708@archive.org> The quite-possibly-nutty idea has occurred to me of auto-generating a VCL that maps each of about 18 million artifacts (incoming URLs) to 1,2,or3 of what are effectively 621 backend locations. (The mapping is essentially arbitrary.) Essentially, it would be replacing a squid url_rewrite_program. 
From l.barszcz at gadu-gadu.pl Wed Mar 2 08:13:30 2011
From: l.barszcz at gadu-gadu.pl (Łukasz Barszcz / Gadu-Gadu)
Date: Wed, 02 Mar 2011 08:13:30 +0100
Subject: caching of restarted requests possible?
In-Reply-To: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> Message-ID: <4D6DEE1A.80609@gadu-gadu.pl> Hi, On 02.03.2011 00:13, Hauke Lampe wrote: > I'd like to cache the 404 response for some time and immediately lookup the object under the next backend's hash key, so the update backend is only queried again after the TTL of the 404 object expires. > > I figure that even if varnish would cache the request before restart, it would probably not go through vcl_fetch next time. I tried setting a magic header in vcl_fetch and restart the request in vcl_deliver. varnish didn't like that and died with "INCOMPLETE AT: cnt_deliver(196)" Check out patch attached to ticket http://varnish-cache.org/trac/ticket/412 which changes behavior to what you need. > Is there any way to remember the previous request status on restart and use it for backend selection in vcl_recv? You can store data custom header in req, like req.http.X-My-State. req.* is accessible from every function in vcl, so you can store your state in there - it persists across restarts. -- ?ukasz Barszcz web architect / Pion Aplikacji Internetowych GG Network S.A ul. Kamionkowska 45 03-812 Warszawa tel.: +48 22 514 64 99 fax.: +48 22 514 64 98 gg:16210 Sp??ka zarejestrowana w S?dzie Rejonowym dla m. st. Warszawy, XIII Wydzia? Gospodarczy KRS pod numerem 0000264575, NIP 867-19-48-977. Kapita? zak?adowy: 1 758 461,10 z? - wp?acony w ca?o?ci. From thereallove at gmail.com Tue Mar 1 17:54:32 2011 From: thereallove at gmail.com (Dan Gherman) Date: Tue, 1 Mar 2011 11:54:32 -0500 Subject: Varnish, between Zeus and Apache Message-ID: Hello! I am confronting with this situation: I manage a Zeus load-balancer cluster who has Apache as a webserver on the nodes in the backend. When Zeus load-balances a connection to an Apache server or Apache-based application, the connection appears to originate from the Zeus machine.Zeus provide an Apache module to work around this. Zeus automatically inserts a special 'X-Cluster-Client-Ip' header into each request, which identifies the true source address of the request. Zeus' Apache module inspects this header and corrects Apache's calculation of the source address. This change is transparent to Apache, and any applications running on or behind Apache. Now the issue is when I have Varnish between Zeus and Apache. Varnish will always "see" the connections coming from the Zeus load-balancer. Is there a way to have a workaround, like that Apache module, so I can then send to Apache the true source address of the request? My error.log is flooded with the usual messages " Ignoring X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine 'node_ip' Thank you! --- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Wed Mar 2 09:09:58 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 2 Mar 2011 09:09:58 +0100 Subject: Varnish, between Zeus and Apache In-Reply-To: References: Message-ID: My guess is that Zeus may also set other headers that identify it to the apache module, and somehow get stripped by Varnish. You should check that out. Otherwise another solution may be placing Varnish in front of Zeus, if that does not affect your cluster setup. 
Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Dan Gherman Sent: Tuesday, March 01, 2011 5:55 PM To: varnish-misc at varnish-cache.org Subject: Varnish, between Zeus and Apache Hello! I am confronting with this situation: I manage a Zeus load-balancer cluster who has Apache as a webserver on the nodes in the backend. When Zeus load-balances a connection to an Apache server or Apache-based application, the connection appears to originate from the Zeus machine.Zeus provide an Apache module to work around this. Zeus automatically inserts a special 'X-Cluster-Client-Ip' header into each request, which identifies the true source address of the request. Zeus' Apache module inspects this header and corrects Apache's calculation of the source address. This change is transparent to Apache, and any applications running on or behind Apache. Now the issue is when I have Varnish between Zeus and Apache. Varnish will always "see" the connections coming from the Zeus load-balancer. Is there a way to have a workaround, like that Apache module, so I can then send to Apache the true source address of the request? My error.log is flooded with the usual messages " Ignoring X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine 'node_ip' Thank you! --- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From varnish at mm.quex.org Wed Mar 2 09:32:42 2011 From: varnish at mm.quex.org (Michael Alger) Date: Wed, 2 Mar 2011 16:32:42 +0800 Subject: Varnish, between Zeus and Apache In-Reply-To: References: Message-ID: <20110302083242.GA26131@grum.quex.org> On Tue, Mar 01, 2011 at 11:54:32AM -0500, Dan Gherman wrote: > I am confronting with this situation: I manage a Zeus load-balancer > cluster who has Apache as a webserver on the nodes in the backend. > When Zeus load-balances a connection to an Apache server or > Apache-based application, the connection appears to originate from the > Zeus machine.Zeus provide an Apache module to work around this. Zeus > automatically inserts a special 'X-Cluster-Client-Ip' header into each > request, which identifies the true source address of the request. > [...] > Is there a way to have a workaround, like that Apache module, so I can > then send to Apache the true source address of the request? My > error.log is flooded with the usual messages " Ignoring > X-Cluster-Client-Ip 'client_ip' from non-Load Balancer machine > 'node_ip' It sounds like Varnish is sending the headers it receives, but the Apache module only respects the X-Cluser-Client-IP header when it's received from a particular IP address(es). See if there's a way to configure the module to accept it from Varnish, i.e. as if Varnish is the load-balancer. There's probably some existing configuration which has the IP address of the Zeus load-balancer. From lampe at hauke-lampe.de Wed Mar 2 12:39:08 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Wed, 02 Mar 2011 12:39:08 +0100 Subject: caching of restarted requests possible? In-Reply-To: <4D6DEE1A.80609@gadu-gadu.pl> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> <4D6DEE1A.80609@gadu-gadu.pl> Message-ID: <4D6E2C5C.9020704@hauke-lampe.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 02.03.2011 08:13, ?ukasz Barszcz / Gadu-Gadu wrote: > Check out patch attached to ticket > http://varnish-cache.org/trac/ticket/412 which changes behavior to what > you need. That looks promising, thanks. I'll give it a try. 
I hadn't thought of using vcl_hit to restart the request, either. That might solve the crash I encountered with restarting from within vcl_deliver. Hauke. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iEYEARECAAYFAk1uLFYACgkQKIgAG9lfHFMeEgCfTIfBp9FzLUjj7uPDrgkSfleo q9MAn2Efxy7kmRb3uMN560zjSsih2nob =mejt -----END PGP SIGNATURE----- From l at lrowe.co.uk Wed Mar 2 15:08:03 2011 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 2 Mar 2011 14:08:03 +0000 Subject: Practical VCL limits; giant URL->backend map In-Reply-To: <4D6C3920.6030708@archive.org> References: <4D6C3920.6030708@archive.org> Message-ID: On 1 March 2011 00:09, Gordon Mohr wrote: > The quite-possibly-nutty idea has occurred to me of auto-generating a VCL > that maps each of about 18 million artifacts (incoming URLs) to 1,2,or3 of > what are effectively 621 backend locations. (The mapping is essentially > arbitrary.) > > Essentially, it would be replacing a squid url_rewrite_program. > > Am I likely to hit any hard VCL implementation limits (in > depth-of-conditional-nesting, overall size, VCL compilation overhead, etc.) > if my VCL is ~100-200MB in size? > > Am I overlooking some other more simple way to have varnish consult an > arbitrary mapping (something similar to a squid url_rewrite_program)? > > Thanks for any warnings/ideas. With that many entries, I expect you'll find that configuration will be quite slow, as there are no index structures in VCL and it compiles down to simple procedural C code. I think you'd be better off taking the approach of integrating with an external database library for the lookup. This blog pos shows how to search for values in an xml file http://www.enrise.com/2011/02/mobile-device-detection-with-wurfl-and-varnish/ but I expect you'll see better performance using sqlite or bdb. Laurence From romain.ledisez at netensia.fr Wed Mar 2 17:46:22 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Wed, 02 Mar 2011 17:46:22 +0100 Subject: Varnish burns the CPU and eat the RAM Message-ID: <1299084382.2658.200.camel@romain.v.netensia.net> Hello all, I'm pretty new to Varnish. I'm deploying it because one of our customer is going to have a special event and the website is pretty slow (I'm working for an Internet hosting company). We are expecting more than 1000 requests per seconds. From what I read here and there, this should not be a problem for Varnish. My problem is that when Varnish is using cache ("deliver", as opposed to "pass"), the CPU consumption increases drasticaly, also the RAM. The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. With a simple test like this (robots.txt = 300 bytes) : ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt CPU consumption is fluctuating between 120% and 160%. Second point is that Varnish consumes all the memory. Trying to limit that, I made a tmpfs mountpoint of 3G : mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ But varnish continues to consume all the memory My configuration is attached to this mail. Varnish is launched like this : /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 120,120,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/mnt/varnish/varnish_storage.bin,100% -p thread_pools 4 I also tried to launch it with parameter "-h classic" It is installed on a Centos 5 up to date, with lastest RPMs provided by the varnish repository. If I put a return (pass) in vcl_fetch, everything is fine (except the backend server, of course). 
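One detail about the storage line, as an aside rather than something raised in the thread: a file storage placed on tmpfs is still backed by RAM, so the size=3g mount only bounds the storage file itself, not the rest of varnishd's memory use (thread stacks, session workspace and so on). A sketch of the same start-up line using malloc storage sized below physical RAM, with the other options left as they are, would be:

    /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 \
        -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 \
        -w 120,120,120 -u varnish -g varnish -S /etc/varnish/secret \
        -s malloc,3G -p thread_pools 4

Whatever the storage type, the resident size will sit somewhat above the -s figure because of per-object and per-thread overhead.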
It makes me think, with my little knowledges of Varnish, that the problem is in the delivering from cache. Output of "varnishstat -1", when running ab, is attached. Thanks for your help. -- Romain LE DISEZ -------------- next part -------------- backend customer1 { .host = "customer1.hoster.net"; .port = "80"; } sub vcl_recv { # # Normalisation des URL # # Normaliser les URL envoy?s par curls -X et LWP if( req.url ~ "^http://" ) { set req.url = regsub(req.url, "http://[^/]*", ""); } # Normaliser l'h?te (domain.tldx -> www.domain.tld) if( req.http.host == "customer1.com" || req.http.host ~ "^(www\.)?customer1\.net$" ) { set req.http.redir = "http://www.customer1.com" req.url; error 750 req.http.redir; } # # Configuration des sites # # R?gles sp?cifiques ? Customer1 if( req.http.host == "www.customer1.com" ) { set req.backend = customer1; # Supprimer l'ent?te Cookie envoy?e par le navigateur remove req.http.Cookie; # OK pour le moment (voir quand la version mobile sera OK) remove req.http.Accept; remove req.http.Accept-Language; remove req.http.Accept-Charset; remove req.http.User-Agent; } # # R?gles g?n?riques adapt?es ? tous les sites # # P?riode de gr?ce : continue de servir le contenu apr?s son expiration du cache # (par exemple le temps de refaire la requ?te vers le backend ou de le deplanter) set req.grace = 3600s; # Normaliser l'ent?te Accept-Encoding if( req.http.Accept-Encoding ) { if( req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|mp4|flv)$" ) { # Ne pas compresser les fichiers d?j? compress?s remove req.http.Accept-Encoding; } elsif( req.http.Accept-Encoding ~ "gzip" ) { set req.http.Accept-Encoding = "gzip"; } elsif( req.http.Accept-Encoding ~ "deflate" ) { set req.http.Accept-Encoding = "deflate"; } else { # Supprimer les algorithmes inconnus remove req.http.Accept-Encoding; } } # Purger l'URL du cache si elle se termine par le param?tre purge if( req.url ~ "~purge$" ) { # Suppression du suffixe "~purge" puis purge de l'URL set req.url = regsub(req.url, "(.*)~purge$", "\1"); purge_url( req.url ); } } # # Appell? apr?s rec?ption de la r?ponse du backend # sub vcl_fetch { # Supprimer l'ent?te Set-Cookie envoy?e par le serveur remove beresp.http.Set-Cookie; } # # Appell? avant l'envoi d'un contenu du cache # sub vcl_deliver { remove resp.http.Via; remove resp.http.X-Varnish; remove resp.http.Server; remove resp.http.X-Powered-By; remove resp.http.P3P; } # # "Catching" des erreurs # sub vcl_error { if( obj.status == 750 ) { set obj.http.Location = obj.response; set obj.status = 301; return(deliver); } } -------------- next part -------------- client_conn 136529 117.80 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 136532 117.80 Client requests received cache_hit 136531 117.80 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 1 0.00 Cache misses backend_conn 1 0.00 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. 
unused fetch_head 0 0.00 Fetch head fetch_length 1 0.00 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 7720 . N struct sess_mem n_sess 18446744073709551606 . N struct sess n_object 1 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 481 . N struct objectcore n_objecthead 481 . N struct objecthead n_smf 3 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 1 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 480 . N worker threads n_wrk_create 480 0.41 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 81 0.07 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 166 0.14 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 6 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 136429 117.71 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 136534 117.80 Total Sessions s_req 136534 117.80 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 1 0.00 Total fetch s_hdrbytes 30885129 26648.08 Total header bytes s_bodybytes 41097938 35459.83 Total body bytes sess_closed 136538 117.81 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 4233973 3653.13 SHM records shm_writes 547445 472.34 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 46002 39.69 SHM MTX contention shm_cycles 1 0.00 SHM cycles through buffer sm_nreq 2 0.00 allocator requests sm_nobj 2 . outstanding allocations sm_balloc 8192 . bytes allocated sm_bfree 2574852096 . bytes free sma_nreq 0 0.00 SMA allocator requests sma_nobj 0 . SMA outstanding allocations sma_nbytes 0 . SMA outstanding bytes sma_balloc 0 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 0 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 0 . SMS bytes allocated sms_bfree 0 . SMS bytes freed backend_req 1 0.00 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 136442 117.72 HCB Lookups without lock hcb_lock 1 0.00 HCB Lookups with lock hcb_insert 1 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 1159 1.00 Client uptime backend_retry 0 0.00 Backend conn. 
retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From rdelsalle at gmail.com Wed Mar 2 18:17:07 2011 From: rdelsalle at gmail.com (Roch Delsalle) Date: Wed, 2 Mar 2011 18:17:07 +0100 Subject: Varnish & Multibrowser Message-ID: Hi, I would like to know how Varnish would behave if a web page is different depending on the browser accessing it. (eg. if a div is hidden for Internet Explorer users) Will it cache it randomly or is will it be able to notice the difference ? Thanks, -- D-Ro.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From ask at develooper.com Wed Mar 2 18:26:05 2011 From: ask at develooper.com (=?iso-8859-1?Q?Ask_Bj=F8rn_Hansen?=) Date: Wed, 2 Mar 2011 09:26:05 -0800 Subject: Varnish & Multibrowser In-Reply-To: References: Message-ID: <2FC7B8E9-2D6C-4B0E-BF8F-E32A8840F68B@develooper.com> On Mar 2, 2011, at 9:17, Roch Delsalle wrote: > I would like to know how Varnish would behave if a web page is different depending on the browser accessing it. > (eg. if a div is hidden for Internet Explorer users) > Will it cache it randomly or is will it be able to notice the difference ? You have to add a token to the cache key based on "was this MSIE". (Or have the developers do it with CSS or JS instead ...) - ask From scaunter at topscms.com Wed Mar 2 19:55:38 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 2 Mar 2011 13:55:38 -0500 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299084382.2658.200.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> Hello: You do not have return(lookup); in recv, not sure why, but that seems odd. Try it with that and see what happens with the test. We (have to) assume this is a 64bit OS. -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Romain LE DISEZ Sent: March-02-11 11:46 AM To: varnish-misc at varnish-cache.org Subject: Varnish burns the CPU and eat the RAM Hello all, I'm pretty new to Varnish. I'm deploying it because one of our customer is going to have a special event and the website is pretty slow (I'm working for an Internet hosting company). We are expecting more than 1000 requests per seconds. From what I read here and there, this should not be a problem for Varnish. My problem is that when Varnish is using cache ("deliver", as opposed to "pass"), the CPU consumption increases drasticaly, also the RAM. The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. With a simple test like this (robots.txt = 300 bytes) : ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt CPU consumption is fluctuating between 120% and 160%. Second point is that Varnish consumes all the memory. Trying to limit that, I made a tmpfs mountpoint of 3G : mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ But varnish continues to consume all the memory My configuration is attached to this mail. 
Varnish is launched like this : /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 120,120,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/mnt/varnish/varnish_storage.bin,100% -p thread_pools 4 I also tried to launch it with parameter "-h classic" It is installed on a Centos 5 up to date, with lastest RPMs provided by the varnish repository. If I put a return (pass) in vcl_fetch, everything is fine (except the backend server, of course). It makes me think, with my little knowledges of Varnish, that the problem is in the delivering from cache. Output of "varnishstat -1", when running ab, is attached. Thanks for your help. -- Romain LE DISEZ From nkinkade at creativecommons.org Thu Mar 3 01:40:34 2011 From: nkinkade at creativecommons.org (Nathan Kinkade) Date: Wed, 2 Mar 2011 19:40:34 -0500 Subject: return(lookup) in vcl_recv() to cache requests with cookies? Message-ID: This seems like one of those perennial questions where the required reply is RTFM or "review the list archives because it's been asked thousands of times", but for whatever reason, I can't find an answer to this aspect of caching requests with cookies. In the examples section of the 2.1 VCL reference (we're running 2.1.5) there is an example for how to force Varnish to cache requests that have cookies: http://www.varnish-cache.org/docs/2.1/reference/vcl.html#examples The instruction is to to return(lookup) in vcl_recv. However, I have found that that doesn't work for me. The only way I can seem to get Varnish 2.1.5 to cache a request with a cookie is to remove the Cookie: header in vcl_recv. Other docs I found also seem to indicate that return(lookup) should work. For example: http://www.varnish-cache.org/trac/wiki/VCLExampleCacheCookies#Cachingbasedonfileextensions There are also loads of other examples on the 'net that indicate that return(lookup) in vcl_recv should work. I though maybe it was cache control headers returned by the backend causing it not to cache, but I tried stripping all those out and it still didn't cache. Am I just missing something here, or is the documentation not fully complete? I don't necessarily want to strip cookies. I just want to cache, or not, based on some regular expression matching the Cookie: header sent by the client. Thanks, Nathan From cosimo at streppone.it Thu Mar 3 02:22:52 2011 From: cosimo at streppone.it (Cosimo Streppone) Date: Thu, 03 Mar 2011 12:22:52 +1100 Subject: Varnish & Multibrowser In-Reply-To: References: Message-ID: On Thu, 03 Mar 2011 04:17:07 +1100, Roch Delsalle wrote: > Hi, > > I would like to know how Varnish would behave if a web page is different > depending on the browser accessing it. Varnish doesn't know that unless you instruct it to. > (eg. if a div is hidden for Internet Explorer users) > Will it cache it randomly or is will it be able to notice the difference It will cache regardless of the content of the page, but according to: 1) vcl_hash(), which defaults to URL + cookies I believe 2) HTTP Vary header of the backend response So basically you have to tell Varnish what you want, and possibly stay consistent between VCL and how your designers make different pages for different browsers. 
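To make the "token in the cache key" advice concrete, here is a rough sketch in the VCL 2.1 syntax used elsewhere in these threads. The X-UA-Class header name and the two browser classes are illustrative assumptions, not something from either poster's setup; the point is to normalize the User-Agent into a small number of classes before it touches the hash.

sub vcl_recv {
    # Collapse the User-Agent into a coarse class (assumed header name)
    # so the cache is not fragmented by every browser version string.
    if (req.http.User-Agent ~ "MSIE") {
        set req.http.X-UA-Class = "msie";
    } else {
        set req.http.X-UA-Class = "other";
    }
}

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    # The extra token gives IE and non-IE users separate cache objects.
    set req.hash += req.http.X-UA-Class;
    return (hash);
}

Having the backend emit "Vary: User-Agent" achieves a similar split, but it only works well if the User-Agent is normalized first; otherwise the hit rate collapses.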
I tried to put together an example based on what we use: http://www.varnish-cache.org/trac/wiki/VCLExampleNormalizeUserAgent YMMV, -- Cosimo From jbooher at praxismicro.com Thu Mar 3 02:28:24 2011 From: jbooher at praxismicro.com (Jeff Booher) Date: Wed, 2 Mar 2011 20:28:24 -0500 Subject: Varnish Cache on multi account VPS Message-ID: I have 5 sites on the VPS. I want to only use Varnish on one. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nkinkade at creativecommons.org Thu Mar 3 02:59:10 2011 From: nkinkade at creativecommons.org (Nathan Kinkade) Date: Wed, 2 Mar 2011 20:59:10 -0500 Subject: Varnish Cache on multi account VPS In-Reply-To: References: Message-ID: This may not be the only, or even the best, way to go about this, but the thing that immediately occurs to me is to wrap your VCL rules for vcl_recv() in something like: sub vcl_recv { if ( req.http.host == "my.varnish.host" ) { [do something] } } Nathan On Wed, Mar 2, 2011 at 20:28, Jeff Booher wrote: > I have 5 sites on the VPS. I want to only use Varnish on one. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From andy at suburban-glory.com Thu Mar 3 09:28:24 2011 From: andy at suburban-glory.com (Andy Walpole) Date: Thu, 03 Mar 2011 08:28:24 +0000 Subject: 403 error message Message-ID: <4D6F5128.8020707@suburban-glory.com> Hi folks, I installed Varnish about a month ago but I've had a number of 403 error messages since (Service Unavailable Guru Meditation: XID: 583189221). It is only solved after a server reboot. I've no idea what is causing them. What is the best way of dissecting the problem? Is there an error file with Varnish? Regards, Andy -- ---------------------- Andy Walpole http://www.suburban-glory.com Work: 05601310400 (local rates) Mob: 07858756827 Skype: andy-walpole */Create and do what is new, through and through/* -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.ledisez at netensia.fr Thu Mar 3 10:13:45 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Thu, 03 Mar 2011 10:13:45 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> <7F0AA702B8A85A4A967C4C8EBAD6902C0105722F@TMG-EVS02.torstar.net> Message-ID: <1299143625.2628.41.camel@romain.v.netensia.net> Hello Stefan, thanks for your attention. Le mercredi 02 mars 2011 ? 13:55 -0500, Caunter, Stefan a ?crit : > You do not have > > return(lookup); > > in recv, not sure why, but that seems odd. Try it with that and see what happens with the test. As I understand, "return (lookup)" is automatically added because it is part of the default "vcl_recv", which is appended to the user vcl_recv. Nevertheless, I added it to the end of my "vcl_recv", it did not change the behaviour. > We (have to) assume this is a 64bit OS. You're right, it is a 64 bit CentOS. Greetings. -- Romain LE DISEZ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From martin.boer at bizztravel.nl Thu Mar 3 11:01:36 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Thu, 03 Mar 2011 11:01:36 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299084382.2658.200.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> Message-ID: <4D6F6700.5020604@bizztravel.nl> Hello Romain, Wat does happen when you limit the amount of memory/space used ? Say something like -s file,/mnt/varnish/varnish_storage.bin,7G Regards, Martin On 03/02/2011 05:46 PM, Romain LE DISEZ wrote: > Hello all, > > I'm pretty new to Varnish. I'm deploying it because one of our customer > is going to have a special event and the website is pretty slow (I'm > working for an Internet hosting company). We are expecting more than > 1000 requests per seconds. > > From what I read here and there, this should not be a problem for > Varnish. > > My problem is that when Varnish is using cache ("deliver", as opposed to > "pass"), the CPU consumption increases drasticaly, also the RAM. > > The server is a Xeon QuadCore 2.5Ghz, 8GB of RAM. > > > With a simple test like this (robots.txt = 300 bytes) : > ab -r -n 1000000 -c 1000 http://www.customer1.com/robots.txt > > CPU consumption is fluctuating between 120% and 160%. > > Second point is that Varnish consumes all the memory. Trying to limit > that, I made a tmpfs mountpoint of 3G : > mount -t tmpfs -o size=3g tmpfs /mnt/varnish/ > > But varnish continues to consume all the memory > > My configuration is attached to this mail. Varnish is launched like > this : > /usr/sbin/varnishd -P /var/run/varnish.pid > -a :80 > -f /etc/varnish/default.vcl > -T 127.0.0.1:6082 > -t 120 > -w 120,120,120 > -u varnish -g varnish > -S /etc/varnish/secret > -s file,/mnt/varnish/varnish_storage.bin,100% > -p thread_pools 4 > > I also tried to launch it with parameter "-h classic" > > It is installed on a Centos 5 up to date, with lastest RPMs provided by > the varnish repository. > > If I put a return (pass) in vcl_fetch, everything is fine (except the > backend server, of course). It makes me think, with my little knowledges > of Varnish, that the problem is in the delivering from cache. > > Output of "varnishstat -1", when running ab, is attached. > > Thanks for your help. > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stewsnooze at gmail.com Thu Mar 3 14:10:40 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Thu, 3 Mar 2011 13:10:40 +0000 Subject: 403 error message In-Reply-To: <4D6F5128.8020707@suburban-glory.com> References: <4D6F5128.8020707@suburban-glory.com> Message-ID: <1870656644209048894@unknownmsgid> On 3 Mar 2011, at 08:29, Andy Walpole wrote: Hi folks, I installed Varnish about a month ago but I've had a number of 403 error messages since (Service Unavailable Guru Meditation: XID: 583189221). It is only solved after a server reboot. I've no idea what is causing them. What is the best way of dissecting the problem? Is there an error file with Varnish? 
Regards, Andy -- ---------------------- Andy Walpole http://www.suburban-glory.com Work: 05601310400 (local rates) Mob: 07858756827 Skype: andy-walpole *Create and do what is new, through and through* _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc 403 points at your backend telling varnish it is forbidden. If varnish is giving you that error it is working and the backend is giving it 403. I've seen this before if backend apps use some sort of rate limiting per ip as by default when you add varnish to an existing setup the ip that gets passed to the backend is the varnish ip not the source ip. You could try passing the ip as x-forwarded-for Stew -------------- next part -------------- An HTML attachment was scrubbed... URL: From camcima at hotmail.com Thu Mar 3 15:33:28 2011 From: camcima at hotmail.com (Carlos Cima) Date: Thu, 3 Mar 2011 11:33:28 -0300 Subject: Grace Message-ID: Hi, Is there any way to check if a particular request was answered "in grace" by sending a HTTP header? I'm trying to increase the grace period if the user-agent contains "Googlebot" in order to speed up crawling response time and consequently be better positioned in Google organic search results. When I access using Googlebot in the user-agent header I'm not sure if Varnish is waiting for a backend request or not. VCL excerpt: sub vcl_recv { ... # Set Grace if (req.http.user-agent ~ "Googlebot") { set req.grace = 12h; } else { set req.grace = 30m; } ... } sub vcl_fetch { ... # Set Grace set beresp.grace = 12h; ... } Thanks, Carlos Cima -------------- next part -------------- An HTML attachment was scrubbed... URL: From shib4u at gmail.com Thu Mar 3 17:23:41 2011 From: shib4u at gmail.com (Shibashish) Date: Thu, 3 Mar 2011 21:53:41 +0530 Subject: Cache dynamic URLs Message-ID: Hi All, My "varnishtop -b -i TxURL" shows... 
9.99 TxURL /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=1 9.99 TxURL /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 9.99 TxURL /xyz.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 6.00 TxURL /aabb.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 5.99 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.99 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.99 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=1 4.00 TxURL /podsaabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 4.00 TxURL /podsaabb.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 4.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 4.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=1 3.99 TxURL /xyz.php?id=2182&status=1 3.00 TxURL /aabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2182&status=1&bt1=1412&bt2=0&pl1=318&pl2=6667&stat=1 3.00 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=1 3.00 TxURL /xyz.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=1 3.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=1 3.00 TxURL /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 3.00 TxURL /xyz.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 3.00 TxURL /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=1 2.00 TxURL /aabb.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=0&pl1=1053&pl2=360&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=1 2.00 TxURL /aabb.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=1 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=1 2.00 TxURL /xyz.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=2 2.00 TxURL /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=2 2.00 TxURL /xyz.php?id=2183&status=2&bt1=7852&bt2=7676&pl1=142&pl2=1053&stat=2 2.00 TxURL /xyz.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 How can i cache those dynamic pages in Varnish, say for 30 sec ? Thanks. -- Shib -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhalfmoon at milksnot.com Thu Mar 3 17:29:04 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Thu, 03 Mar 2011 17:29:04 +0100 Subject: backend_toolate count suddenly drops after upgrade from 2.1.3 too 2.1.5 In-Reply-To: <1870656644209048894@unknownmsgid> References: <4D6F5128.8020707@suburban-glory.com> <1870656644209048894@unknownmsgid> Message-ID: <4D6FC1D0.7020508@milksnot.com> Hiya, today I upgraded a few Varnish servers from v2.1.3 to v2.1.5. 
The machines are purring along nicely, but I did notice something curious on in the server's statistics: the backend_toolate is down from a very wobbly average of 20p/s too a constant 0.7p/s. Also , the object & object head count are way down. n_lru_nuked is also down from an average of 10p/s to zero. Hitrate is unaffected and performance is slightly up (a few percent less cpuload on high-traffic moments). This is no temporary effect, because I've seen it on another machine in the same cluster, which I upgraded a week ago. I did a quick comparison between 2.1.3 and 2.1.5 of varnishadm's 'param.show' and also a quick scan of the sourcecode of 2.1.3 & 2.1.5, but couldn't find any parameter defaults that might have been changed between versions. It's not causing any issues here, other that a bit more performance. I'm just curious: Does anybody know what's going on here? Cheers, Johhny -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish-215-backendtoolate.jpg Type: image/jpeg Size: 25299 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish-215-headcount.jpg Type: image/jpeg Size: 19799 bytes Desc: not available URL: From isharov at yandex-team.ru Thu Mar 3 17:35:23 2011 From: isharov at yandex-team.ru (Iliya Sharov) Date: Thu, 03 Mar 2011 19:35:23 +0300 Subject: Cache dynamic URLs In-Reply-To: References: Message-ID: <4D6FC34B.5010209@yandex-team.ru> Hi. May be sub vcl_hash { set req.hash +=req.url; return(hash); } sub vcl_fetch { if (req.url ~ "(php") { set beresp.ttl =30s;} } and -p lru_interval=30 in command-line run options? Wbr, Iliya Sharov 03.03.2011 19:23, Shibashish ?????: > Hi All, > > My "varnishtop -b -i TxURL" shows... > > 9.99 TxURL > /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=1 > 9.99 TxURL > /abc.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 9.99 TxURL > /xyz.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 6.00 TxURL > /aabb.php?id=2183&status=2&bt1=686&bt2=3285&pl1=1051&pl2=504&stat=2 > 5.99 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.99 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.99 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=1 > 4.00 TxURL > /podsaabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /podsaabb.php?id=2183&status=2&bt1=3285&bt2=1433&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 4.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=1 > 3.99 TxURL /xyz.php?id=2182&status=1 > 3.00 TxURL > /aabb.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2182&status=1&bt1=1412&bt2=0&pl1=318&pl2=6667&stat=1 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=1 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=1 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=7852&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=142&pl2=1053&stat=1 > 3.00 
TxURL > /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 > 3.00 TxURL > /xyz.php?id=2183&status=2&bt1=1433&bt2=3285&pl1=504&pl2=142&stat=2 > 3.00 TxURL > /abc.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=1 > 2.00 TxURL > /aabb.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=0&pl1=1053&pl2=360&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=1 > 2.00 TxURL > /aabb.php?id=2183&status=2&bt1=7852&bt2=0&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=1 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=7852&pl1=142&pl2=1053&stat=1 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=3285&bt2=686&pl1=504&pl2=142&stat=2 > 2.00 TxURL > /abc.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=142&pl2=1051&stat=2 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=7852&bt2=7676&pl1=142&pl2=1053&stat=2 > 2.00 TxURL > /xyz.php?id=2183&status=2&bt1=7676&bt2=1432&pl1=360&pl2=1053&stat=2 > > How can i cache those dynamic pages in Varnish, say for 30 sec ? > > Thanks. > > -- > Shib > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.ledisez at netensia.fr Thu Mar 3 17:57:11 2011 From: romain.ledisez at netensia.fr (Romain LE DISEZ) Date: Thu, 03 Mar 2011 17:57:11 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <4D6F6700.5020604@bizztravel.nl> References: <1299084382.2658.200.camel@romain.v.netensia.net> <4D6F6700.5020604@bizztravel.nl> Message-ID: <1299171431.2628.47.camel@romain.v.netensia.net> Hello Martin, Le jeudi 03 mars 2011 ? 11:01 +0100, Martin Boer a ?crit : > Wat does happen when you limit the amount of memory/space used ? Say > something like > > -s file,/mnt/varnish/varnish_storage.bin,7G I did that : # free -m total used free Mem: 7964 156 7807 -/+ buffers/cache: 156 7807 # /etc/init.d/varnish start Starting varnish HTTP accelerator: [ OK ] # free -m total used free Mem: 7964 5044 2920 -/+ buffers/cache: 5044 2920 # ps uax | grep varnish /usr/sbin/varnishd [...] -s file,/mnt/varnish/varnish_storage.bin,1G -p thread_pools 4 Even with a limit of 1G, it consumes 5G of RAM. Could it be related to the number of thread ? -- Romain LE DISEZ -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3354 bytes Desc: not available URL: From thebog at gmail.com Thu Mar 3 18:11:10 2011 From: thebog at gmail.com (thebog) Date: Thu, 3 Mar 2011 18:11:10 +0100 Subject: Varnish burns the CPU and eat the RAM In-Reply-To: <1299171431.2628.47.camel@romain.v.netensia.net> References: <1299084382.2658.200.camel@romain.v.netensia.net> <4D6F6700.5020604@bizztravel.nl> <1299171431.2628.47.camel@romain.v.netensia.net> Message-ID: The storage you are assigning is the storage for objects, not memoryspace. When it comes to how much memory Varnish uses, it's assigned by the OS. There is a big difference of how much is uses and how much it's assigned by the OS (normally). Use the top command and read the difference between whats actually used and how much is reserved. Read: http://www.varnish-cache.org/docs/2.1/faq/general.html for more info around that. 
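A rough way to see that distinction on the machine itself (generic commands, not specific to this setup) is to compare the virtual size, which includes the mmap'ed storage file, with the resident set, which is what actually sits in RAM:

# VSZ counts the mapped storage file; RSS is what really occupies RAM.
ps -C varnishd -o pid,vsz,rss,comm

# In free, the "-/+ buffers/cache" line is the one to watch, since pages
# backing the storage file show up as cache and can be reclaimed.
free -m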
In short, Varnish is using modern OS technics to find the "right" balance, and therefore memory should not be an issue. The burning of CPU is not correct, but I don't have any good pointers there. Join the irc channel, and ask if someone there can help you out. YS Anders Berg On Thu, Mar 3, 2011 at 5:57 PM, Romain LE DISEZ wrote: > Hello Martin, > > Le jeudi 03 mars 2011 ? 11:01 +0100, Martin Boer a ?crit : >> Wat does happen when you limit the amount of memory/space used ? Say >> something like >> >> -s file,/mnt/varnish/varnish_storage.bin,7G > > I did that : > > # free -m > ? ? ? ? ? ? total ? ? ? used ? ? ? free > Mem: ? ? ? ? ?7964 ? ? ? ?156 ? ? ? 7807 > -/+ buffers/cache: ? ? ? ?156 ? ? ? 7807 > > # /etc/init.d/varnish start > Starting varnish HTTP accelerator: ? ? ? ? ? ? ? ? ? ? ? ? [ ?OK ?] > > # free -m > ? ? ? ? ? ? total ? ? ? used ? ? ? free > Mem: ? ? ? ? ?7964 ? ? ? 5044 ? ? ? 2920 > -/+ buffers/cache: ? ? ? 5044 ? ? ? 2920 > > # ps uax | grep varnish > /usr/sbin/varnishd [...] -s file,/mnt/varnish/varnish_storage.bin,1G -p thread_pools 4 > > Even with a limit of 1G, it consumes 5G of RAM. Could it be related to > the number of thread ? > > > -- > Romain LE DISEZ > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From ruben.ortiz at itnet.es Thu Mar 3 16:18:57 2011 From: ruben.ortiz at itnet.es (=?iso-8859-1?Q?Rub=E9n_Ortiz?=) Date: Thu, 3 Mar 2011 16:18:57 +0100 Subject: Varnish Thread_Pool_Max. How to increase? Message-ID: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> Hello Firstable my Varnish version varnishd (varnish-2.0.4) Linux Kernel 2.6.18-028stab070.14 I'm really new to Varnish. I want to configure it in my way, tunning some parameters but I don't know how. I have this setup: DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -u varnish -g varnish \ -w 100,2000 \ -s file,/var/lib/varnish/varnish_storage.bin,2G" Theorically, with -w 100,2000 I'm telling to Varnish to increase its defaults values for thread_pool_min,thread_pool_max and yes, when I check stats with param.show I can see the changes. Previously, I have reboted varnish daemon. But How can I increase thread_pool_max? I was able to change in admin console, but when I reboot service, the param backs to its default config (2) Thanks people! Rub?n Ortiz Administrador de Sistemas Edificio Nova Gran Via Av. Gran V?a, 16-20, 2? planta | 08902 L'Hospitalet de Llobregat (Barcelona) T 902 999 343 | F 902 999 341 www.grupoitnet.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: grupo-itnet.jpg Type: image/jpeg Size: 7808 bytes Desc: not available URL: From liulu2 at leadsec.com.cn Fri Mar 4 03:14:01 2011 From: liulu2 at leadsec.com.cn (=?gb2312?B?wfXCtg==?=) Date: Fri, 4 Mar 2011 10:14:01 +0800 Subject: [bug]varnish-2.1.5 run fail in linux-2.4.20 Message-ID: <201103041014015137100@leadsec.com.cn> jemalloc_linux.c line:5171 pthread_atfork(_malloc_prefork, _malloc_postfork, _malloc_postfork) produce of deadlock. 2011-03-04 liulu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nadahalli at gmail.com Fri Mar 4 07:08:36 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 01:08:36 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Message-ID: Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- client_conn 12408 8.19 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 5549280 3662.89 Client requests received cache_hit 5543904 3659.34 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 5376 3.55 Cache misses backend_conn 5373 3.55 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 3 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 5373 3.55 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 798 . N struct sess_mem n_sess 548 . N struct sess n_object 5373 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 5806 . N struct objectcore n_objecthead 5806 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 800 . 
N worker threads n_wrk_create 800 0.53 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 0 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 74099 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5543132 3658.83 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 12407 8.19 Total Sessions s_req 5549280 3662.89 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 5373 3.55 Total fetch s_hdrbytes 1245394845 822042.80 Total header bytes s_bodybytes 13448598673 8876962.82 Total body bytes sess_closed 43 0.03 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 5549262 3662.88 Session Linger sess_herd 1431702 945.02 Session herd shm_records 162564756 107303.47 SHM records shm_writes 7031015 4640.93 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 138344 91.32 SHM MTX contention shm_cycles 60 0.04 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 10746 7.09 SMA allocator requests sma_nobj 10746 . SMA outstanding allocations sma_nbytes 17538168 . SMA outstanding bytes sma_balloc 17538168 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 3 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 1254 . SMS bytes allocated sms_bfree 1254 . SMS bytes freed backend_req 5373 3.55 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 5540904 3657.36 HCB Lookups without lock hcb_lock 5375 3.55 HCB Lookups with lock hcb_insert 5373 3.55 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 1515 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) From tfheen at varnish-software.com Fri Mar 4 08:33:56 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 04 Mar 2011 08:33:56 +0100 Subject: backend_toolate count suddenly drops after upgrade from 2.1.3 too 2.1.5 In-Reply-To: <4D6FC1D0.7020508@milksnot.com> (Johnny Halfmoon's message of "Thu, 03 Mar 2011 17:29:04 +0100") References: <4D6F5128.8020707@suburban-glory.com> <1870656644209048894@unknownmsgid> <4D6FC1D0.7020508@milksnot.com> Message-ID: <87d3m7e3vf.fsf@qurzaw.varnish-software.com> ]] Johnny Halfmoon | It's not causing any issues here, other that a bit more | performance. 
I'm just curious: Does anybody know what's going on here? It could be the automatic retry of requests when the backend closes the connection at us. See commits 19966c023f3bba30c32187a0c432c1711ac25201 and f7a5d684ef8fa5352f5fe6f5a28f6fe45f72deb1 regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From perbu at varnish-software.com Fri Mar 4 08:48:05 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 4 Mar 2011 08:48:05 +0100 Subject: Cache dynamic URLs In-Reply-To: <4D6FC34B.5010209@yandex-team.ru> References: <4D6FC34B.5010209@yandex-team.ru> Message-ID: On Thu, Mar 3, 2011 at 5:35 PM, Iliya Sharov wrote: > Hi. > May be > sub vcl_hash > { > set req.hash > +=req.url; > > > return(hash); > } > This part isn't necessary. > > sub vcl_fetch > { > if (req.url ~ "(php") { set beresp.ttl =30s;} > } > It will work. > and -p lru_interval=30 in command-line run options? > This is also not relevant. I wouldn't recommend screwing around with parameters unless it is called for and you're know what you're doing. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Fri Mar 4 08:50:37 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 4 Mar 2011 08:50:37 +0100 Subject: Varnish Thread_Pool_Max. How to increase? In-Reply-To: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> References: <1DA6C799BA22444B8097EAA2C0B4A8E0@rystem01> Message-ID: 2011/3/3 Rub?n Ortiz > But How can I increase thread_pool_max? I was able to change in admin > console, but when I reboot service, the param backs to its default config > (2) > Take a look at the init script. It will probably source /etc/sysconfig/varnish or /etc/default/varnish and you can set the startup parameters there. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Fri Mar 4 08:56:24 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 04 Mar 2011 08:56:24 +0100 Subject: [bug]varnish-2.1.5 run fail in linux-2.4.20 In-Reply-To: <201103041014015137100@leadsec.com.cn> (=?utf-8?B?IuWImA==?= =?utf-8?B?6ZyyIidz?= message of "Fri, 4 Mar 2011 10:14:01 +0800") References: <201103041014015137100@leadsec.com.cn> Message-ID: <878vwve2tz.fsf@qurzaw.varnish-software.com> ]] "??" Hi, | jemalloc_linux.c line:5171 pthread_atfork(_malloc_prefork, _malloc_postfork, _malloc_postfork) produce of deadlock. 2.4 is quite old, 2.4.20 was released in 2002 so you should upgrade to something newer. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From steve.webster at lovefilm.com Fri Mar 4 11:14:18 2011 From: steve.webster at lovefilm.com (Steve Webster) Date: Fri, 4 Mar 2011 10:14:18 +0000 Subject: Processing ESIs in parallel Message-ID: Hi, We've been looking at using Varnish 2.1.5 with ESIs to allow us to cache the bulk of our page content whilst still generating the user-specific sections dynamically. The sticking point for us is that some of these page sections cannot be cached. 
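For reference, the kind of setup being described looks roughly like this in VCL 2.1; the /fragment/ URL convention is an assumption made for the sketch, not anything from the actual site. The surrounding page is cached and parsed for includes, while the per-user fragment is always fetched from the backend:

sub vcl_recv {
    if (req.url ~ "^/fragment/") {
        # Per-user content: never cache, always go to the backend.
        return (pass);
    }
}

sub vcl_fetch {
    if (beresp.http.Content-Type ~ "text/html") {
        # Parse the cached template for <esi:include src="/fragment/..."/>
        esi;
    }
}

Each include found in the template then becomes its own cache lookup or backend fetch at delivery time.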
It seems, based on both observed behaviour and a quick look at the code for ESI_Deliver, that Varnish is processing and requesting content for the ESIs serially rather than in parallel. I know there has been a lot of work on ESIs for Varnish 3, but as far as I can tell they are still processed serially. Are there any plans to switch to a parallel processing model? If not, might this be a worthy feature request for a future version of Varnish? Cheers, Steve -- Steve Webster Web Architect LOVEFiLM ----------------------------------------------------------------------------------------------------------------------------------------- LOVEFiLM UK Limited is a company registered in England and Wales. Registered Number: 06528297. Registered Office: No.9, 6 Portal Way, London W3 6RU, United Kingdom. This e-mail is confidential to the ordinary user of the e-mail address to which it was addressed. If you have received it in error, please delete it from your system and notify the sender immediately. This email message has been delivered safely and archived online by Mimecast. For more information please visit http://www.mimecast.co.uk ----------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ksorensen at nordija.com Fri Mar 4 11:56:13 2011 From: ksorensen at nordija.com (Kristian =?ISO-8859-1?Q?Gr=F8nfeldt_S=F8rensen?=) Date: Fri, 04 Mar 2011 11:56:13 +0100 Subject: Processing ESIs in parallel In-Reply-To: References: Message-ID: <1299236173.21671.17.camel@kriller.nordija.dk> On Fri, 2011-03-04 at 10:14 +0000, Steve Webster wrote: > Hi, > > We've been looking at using Varnish 2.1.5 with ESIs to allow us to > cache the bulk of our page content whilst still generating the > user-specific sections dynamically. The sticking point for us is that > some of these page sections cannot be cached. It seems, based on both > observed behaviour and a quick look at the code for ESI_Deliver, that > Varnish is processing and requesting content for the ESIs serially > rather than in parallel. I would like to see that feature in varnish as well. In our case we are including up to several hundred objects from a single document, and due to the nature of our data, chances are that if the first included ESI-object is a miss, then most of the remaining ESI-objects will be misses, so it would be great to be able to request some of the objects in parallel to speed up delivery. Regards Kristian S?rensen From phk at phk.freebsd.dk Fri Mar 4 12:07:24 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 04 Mar 2011 11:07:24 +0000 Subject: Processing ESIs in parallel In-Reply-To: Your message of "Fri, 04 Mar 2011 10:14:18 GMT." Message-ID: <61061.1299236844@critter.freebsd.dk> In message , Steve Webster w rites: >I know there has been a lot of work on ESIs for Varnish 3, but as far as I >can tell they are still processed serially. Are there any plans to switch to >a parallel processing model? If not, might this be a worthy feature request >for a future version of Varnish?s I wouldn't call them "plans", but it is on our wish-list. It is not simple though, so don't hold your breath. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
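Returning briefly to the "Cache dynamic URLs" thread above, a minimal VCL 2.1 sketch of the short-TTL idea suggested there; the 30-second lifetime comes from that discussion, and the "\.php" match is an assumption about which URLs should be covered:

sub vcl_fetch {
    # Give the query-string heavy .php responses a short lifetime so the
    # backend sees at most one fetch per distinct URL every 30 seconds.
    if (req.url ~ "\.php") {
        set beresp.ttl = 30s;
    }
}

The built-in vcl_fetch still runs afterwards, so responses that carry Set-Cookie or are otherwise marked uncacheable continue to be passed rather than stored.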
From pom at dmsp.de Fri Mar 4 12:34:45 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Fri, 04 Mar 2011 12:34:45 +0100 Subject: varnishreplay question Message-ID: <4D70CE55.20306@dmsp.de> Hi all, I try to use varnishreplay for the first time. What I did is the following: - run varnishlog without any parameters and produce some quite big logfile on a production varnish - copy the log file (from varnishlog) from production to a testing machine (running varnish of course) - fiddle around with vcl to direct traffic to some standby backends - call varnishreplay 'varnishreplay -D -a localhost:80 -r ' Unfortunately a varnishstat on this testing machine does not show me any activity and my only output on console is: 0x7f3d703b5700 thread 0x7f3d704d4710:1701999465 started 0x7f3d703b5700 thread 0x7f3d703b3710:540291889 started 0x7f3d703b5700 thread 0x7f3d703ab710:678913378 started 0x7f3d703b5700 thread 0x7f3d703a3710:540357940 started 0x7f3d703b5700 thread 0x7f3d7039b710:540161076 started 0x7f3d703b5700 thread 0x7f3d70393710:540292660 started 0x7f3d703b5700 thread 0x7f3d7038b710:540292149 started 0x7f3d703b5700 thread 0x7f3d70383710:1701014383 started 0x7f3d703b5700 thread 0x7f3d7037b710:540292919 started 0x7f3d703b5700 thread 0x7f3d70373710:540162097 started 0x7f3d703b5700 thread 0x7f3d7036b710:825110816 started 0x7f3d703b5700 thread 0x7f3d6f28e710:1852796537 started 0x7f3d703b5700 thread 0x7f3d6f286710:540162100 started 0x7f3d703b5700 thread 0x7f3d6f27e710:540095033 started 0x7f3d703b5700 thread 0x7f3d6f276710:540292147 started 0x7f3d703b5700 thread 0x7f3d6f26e710:540094774 started 0x7f3d703b5700 thread 0x7f3d6f266710:540487985 started 0x7f3d703b5700 thread 0x7f3d6f25e710:1107959840 started 0x7f3d703b5700 thread 0x7f3d6f256710:540423988 started 0x7f3d703b5700 thread 0x7f3d6f24e710:540357431 started 0x7f3d703b5700 thread 0x7f3d6f246710:540488244 started 0x7f3d703b5700 thread 0x7f3d6f23e710:540356662 started 0x7f3d703b5700 thread 0x7f3d6f236710:540488756 started 0x7f3d703b5700 thread 0x7f3d6f22e710:540160820 started 0x7f3d703b5700 thread 0x7f3d6f26e710 stopped 0x7f3d703b5700 thread 0x7f3d6f27e710 stopped 0x7f3d703b5700 thread 0x7f3d6f22e710 stopped 0x7f3d703b5700 thread 0x7f3d7039b710 stopped 0x7f3d703b5700 thread 0x7f3d70373710 stopped 0x7f3d703b5700 thread 0x7f3d6f286710 stopped 0x7f3d703b5700 thread 0x7f3d703b3710 stopped 0x7f3d703b5700 thread 0x7f3d6f276710 stopped 0x7f3d703b5700 thread 0x7f3d7038b710 stopped 0x7f3d703b5700 thread 0x7f3d70393710 stopped 0x7f3d703b5700 thread 0x7f3d7037b710 stopped 0x7f3d703b5700 thread 0x7f3d6f23e710 stopped 0x7f3d703b5700 thread 0x7f3d6f24e710 stopped 0x7f3d703b5700 thread 0x7f3d703a3710 stopped 0x7f3d703b5700 thread 0x7f3d6f256710 stopped 0x7f3d703b5700 thread 0x7f3d6f266710 stopped 0x7f3d703b5700 thread 0x7f3d6f246710 stopped 0x7f3d703b5700 thread 0x7f3d6f236710 stopped 0x7f3d703b5700 thread 0x7f3d703ab710 stopped 0x7f3d703b5700 thread 0x7f3d7036b710 stopped 0x7f3d703b5700 thread 0x7f3d6f25e710 stopped 0x7f3d703b5700 thread 0x7f3d70383710 stopped 0x7f3d703b5700 thread 0x7f3d704d4710 stopped 0x7f3d703b5700 thread 0x7f3d6f28e710 stopped Ehm, varnish on production machine is 2.1.3, on testing platform is 2.1.5. Well - I'm doing it wrong - I know... but, how to do it correctly? Any idea or hint? Thanks! Kind regards, Stefan -- *Dipl.-Inform. 
Stefan Pommerening Informatik-B?ro: IT-Dienste & Projekte, Consulting & Coaching* http://www.dmsp.de From steve.webster at lovefilm.com Fri Mar 4 13:39:25 2011 From: steve.webster at lovefilm.com (Steve Webster) Date: Fri, 4 Mar 2011 12:39:25 +0000 Subject: Processing ESIs in parallel In-Reply-To: <61061.1299236844@critter.freebsd.dk> References: <61061.1299236844@critter.freebsd.dk> Message-ID: On 4 Mar 2011, at 11:07, Poul-Henning Kamp wrote: > In message , Steve Webster w > rites: > >> I know there has been a lot of work on ESIs for Varnish 3, but as far as I >> can tell they are still processed serially. Are there any plans to switch to >> a parallel processing model? If not, might this be a worthy feature request >> for a future version of Varnish?s > > I wouldn't call them "plans", but it is on our wish-list. This is good news. > It is not simple though, so don't hold your breath. Indeed. I had one of those "how hard could this be" moments and started trying to implement it myself, then realised I had opened a can of worms and decided to leave Varnish hacking to the experts. I have a workaround for now ? a custom Apache output filter that uses LWP::Parallel ? so thankfully breathe-holding isn't necessary. Cheers, Steve -- Steve Webster Web Architect LOVEFiLM ----------------------------------------------------------------------------------------------------------------------------------------- LOVEFiLM UK Limited is a company registered in England and Wales. Registered Number: 06528297. Registered Office: No.9, 6 Portal Way, London W3 6RU, United Kingdom. This e-mail is confidential to the ordinary user of the e-mail address to which it was addressed. If you have received it in error, please delete it from your system and notify the sender immediately. This email message has been delivered safely and archived online by Mimecast. For more information please visit http://www.mimecast.co.uk ----------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Fri Mar 4 15:43:42 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Fri, 4 Mar 2011 09:43:42 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Check your ec2 network settings. OS and varnish settings look okay, your varnishstat shows that varnish is coasting along fine. It's not threads. You have 800 available, according to the varnishstat; it's running with 800 threads, handling 12,000+ connections, and there is no thread creation failure. Therefore it does not need to add threads. What does something like firebug show when you request during the load test? The delay may be anything from DNS to the ec2 network. 
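One way to break such a delay apart from the client side (a generic sketch, with a placeholder URL) is curl's timing variables, which separate name lookup, TCP connect, time to first byte and total time:

curl -s -o /dev/null \
     -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
     http://your-varnish-host/some-cached-url

If dns and connect stay small while ttfb carries the delay, the wait is happening inside Varnish or the backend rather than in name resolution or the network path.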
Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:09 AM To: varnish-misc at varnish-cache.org Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/0 05351.html (Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/ 005258.html (Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelcrocha at gmail.com Fri Mar 4 18:47:10 2011 From: rafaelcrocha at gmail.com (rafael) Date: Fri, 04 Mar 2011 14:47:10 -0300 Subject: ESI include does not work until I reload page Message-ID: <4D71259E.1090305@gmail.com> # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. backend backend_0 { .host = "127.0.0.1"; .port = "1010"; .connect_timeout = 0.4s; .first_byte_timeout = 300s; .between_bytes_timeout = 60s; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { set req.grace = 120s; set req.backend = backend_0; if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ pass; } if (req.http.If-None-Match) { pass; } if (req.url ~ "createObject") { pass; } remove req.http.Accept-Encoding; lookup; } sub vcl_pipe { # This is not necessary if you do not do any request rewriting. 
set req.http.connection = "close"; set bereq.http.connection = "close"; } sub vcl_hit { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged"; } if (!obj.cacheable) { pass; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache"; } } sub vcl_fetch { set obj.grace = 120s; if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") { pass; } if (req.http.Authorization && !obj.http.Cache-Control ~ "public") { pass; } if (obj.http.Content-Type ~ "text/html") { esi; } } sub vcl_hash { set req.hash += req.url; set req.hash += req.http.host; if (req.http.Accept-Encoding ~ "gzip") { set req.hash += "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.hash += "deflate"; } } From nadahalli at gmail.com Fri Mar 4 19:22:58 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 13:22:58 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Message-ID: On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan wrote: > > What does something like firebug show when you request during the load > test? The delay may be anything from DNS to the ec2 network. > The DNS requests are getting resolved super quick. I am unable to see any other network issues with EC2. I have a similar machine in the same data center running nginx which is doing similar loads, but with no caching requirement, and it's running fine. In my first post, I forgot to attach my VCL, which is a bit too minimal. Am I missing something obvious? ------ backend default0 { .host = "10.202.30.39"; .port = "8000"; } sub vcl_recv { unset req.http.Cookie; set req.grace = 3600s; set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } ------------------------- Could there be some kind of TCP packet pileup that I am missing? -T > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:09 AM > *To:* varnish-misc at varnish-cache.org > *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > Hi Everyone, > > I am seeing a situation similar to : > > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) > > I have httperf loading a varnish cache with never-expire content. While the > load is on, other browser/wget requests to the varnish server get delayed to > 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be > impacted. So, is it some kind of thread problem? > > In production, I see a similar situation with around 1000 req/second load. 
> > I am running varnishd with the following command line options (as per > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -t 31536000 > > I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. > > My network sysctl parameters are tuned according to: > http://varnish-cache.org/trac/wiki/Performance > fs.file-max = 360000 > net.ipv4.ip_local_port_range = 1024 65536 > net.core.rmem_max = 16777216 > net.core.wmem_max = 16777216 > net.ipv4.tcp_rmem = 4096 87380 16777216 > net.ipv4.tcp_wmem = 4096 65536 16777216 > net.ipv4.tcp_fin_timeout = 3 > net.core.netdev_max_backlog = 30000 > net.ipv4.tcp_no_metrics_save = 1 > net.core.somaxconn = 262144 > net.ipv4.tcp_syncookies = 0 > net.ipv4.tcp_max_orphans = 262144 > net.ipv4.tcp_max_syn_backlog = 262144 > net.ipv4.tcp_synack_retries = 2 > net.ipv4.tcp_syn_retries = 2 > > > Any help would be greatly appreciated > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Fri Mar 4 20:25:12 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Fri, 4 Mar 2011 14:25:12 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> There's no health check in the backend. Not sure what that does with a one hour grace. I set a short grace with if (req.backend.healthy) { set req.grace = 60s; } else { set req.grace = 4h; } You also don't appear to select a backend in recv. Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:23 PM To: varnish-misc at varnish-cache.org Subject: Re: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan wrote: What does something like firebug show when you request during the load test? The delay may be anything from DNS to the ec2 network. The DNS requests are getting resolved super quick. I am unable to see any other network issues with EC2. I have a similar machine in the same data center running nginx which is doing similar loads, but with no caching requirement, and it's running fine. In my first post, I forgot to attach my VCL, which is a bit too minimal. Am I missing something obvious? ------ backend default0 { .host = "10.202.30.39"; .port = "8000"; } sub vcl_recv { unset req.http.Cookie; set req.grace = 3600s; set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } ------------------------- Could there be some kind of TCP packet pileup that I am missing? 
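Two quick checks that can help answer that on a Linux host (generic commands, nothing specific to this setup): whether Varnish is queueing or dropping work, and whether the kernel's listen queue is overflowing.

# Thread starvation shows up as growing queue/overflow/drop counters.
varnishstat -1 | egrep 'n_wrk|client_drop'

# A TCP-level pileup shows up as listen queue overflows and dropped SYNs.
netstat -s | grep -i listen

In the varnishstat output quoted earlier in this thread, n_wrk_queue, n_wrk_overflow and n_wrk_drop are all zero, which is why the thread pool itself looks healthy.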
-T Stefan Caunter Operations Torstar Digital m: (416) 561-4871 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tejaswi Nadahalli Sent: March-04-11 1:09 AM To: varnish-misc at varnish-cache.org Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse Hi Everyone, I am seeing a situation similar to : http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/0 05351.html (Connections Dropped Under Load) http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/ 005258.html (Hanging Connections) I have httperf loading a varnish cache with never-expire content. While the load is on, other browser/wget requests to the varnish server get delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be impacted. So, is it some kind of thread problem? In production, I see a similar situation with around 1000 req/second load. I am running varnishd with the following command line options (as per http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -t 31536000 I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. My network sysctl parameters are tuned according to: http://varnish-cache.org/trac/wiki/Performance fs.file-max = 360000 net.ipv4.ip_local_port_range = 1024 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_fin_timeout = 3 net.core.netdev_max_backlog = 30000 net.ipv4.tcp_no_metrics_save = 1 net.core.somaxconn = 262144 net.ipv4.tcp_syncookies = 0 net.ipv4.tcp_max_orphans = 262144 net.ipv4.tcp_max_syn_backlog = 262144 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 Any help would be greatly appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Fri Mar 4 20:30:00 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 14:30:00 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: > There?s no health check in the backend. Not sure what that does with a one > hour grace. I set a short grace with > > > > if (req.backend.healthy) { > > set req.grace = 60s; > > } else { > > set req.grace = 4h; > > } > I am still to add health-checks, directors, etc. Will add them soon. But those make sense if the cache-primed performance is good. In my test, I am requesting URLs who I know are already in the cache. Varnishstat also shows that - there are no cache misses at all. > > > You also don?t appear to select a backend in recv. > The default backend seems to be getting picked up automatically. 
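For completeness, a sketch of what the probe-based setup hinted at above could look like in VCL 2.1, reusing the backend address already posted in this thread; the probe URL, interval and thresholds are illustrative assumptions. With a probe defined, req.backend.healthy has something to report and the short/long grace trick quoted above behaves as intended:

backend default0 {
    .host = "10.202.30.39";
    .port = "8000";
    .probe = {
        .url = "/robots.txt";   # any cheap URL the backend always answers
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

sub vcl_recv {
    set req.backend = default0;
    if (req.backend.healthy) {
        set req.grace = 60s;
    } else {
        set req.grace = 4h;
    }
}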
-T > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:23 PM > > *To:* varnish-misc at varnish-cache.org > *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan > wrote: > > > > What does something like firebug show when you request during the load > test? The delay may be anything from DNS to the ec2 network. > > > The DNS requests are getting resolved super quick. I am unable to see any > other network issues with EC2. I have a similar machine in the same data > center running nginx which is doing similar loads, but with no caching > requirement, and it's running fine. > > In my first post, I forgot to attach my VCL, which is a bit too minimal. Am > I missing something obvious? > > ------ > backend default0 { > .host = "10.202.30.39"; > .port = "8000"; > } > > sub vcl_recv { > unset req.http.Cookie; > set req.grace = 3600s; > set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); > } > > sub vcl_deliver { > if (obj.hits > 0) { > set resp.http.X-Cache = "HIT"; > } else { > set resp.http.X-Cache = "MISS"; > } > } > ------------------------- > > Could there be some kind of TCP packet pileup that I am missing? > > -T > > > > > Stefan Caunter > > Operations > > Torstar Digital > > m: (416) 561-4871 > > > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli > *Sent:* March-04-11 1:09 AM > *To:* varnish-misc at varnish-cache.org > *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed > Reponse > > > > Hi Everyone, > > I am seeing a situation similar to : > > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) > > I have httperf loading a varnish cache with never-expire content. While the > load is on, other browser/wget requests to the varnish server get delayed to > 10+ seconds. Any ideas what could be happening? ssh doesn't seem to be > impacted. So, is it some kind of thread problem? > > In production, I see a similar situation with around 1000 req/second load. > > I am running varnishd with the following command line options (as per > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -t 31536000 > > I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. 
> > My network sysctl parameters are tuned according to: > http://varnish-cache.org/trac/wiki/Performance > fs.file-max = 360000 > net.ipv4.ip_local_port_range = 1024 65536 > net.core.rmem_max = 16777216 > net.core.wmem_max = 16777216 > net.ipv4.tcp_rmem = 4096 87380 16777216 > net.ipv4.tcp_wmem = 4096 65536 16777216 > net.ipv4.tcp_fin_timeout = 3 > net.core.netdev_max_backlog = 30000 > net.ipv4.tcp_no_metrics_save = 1 > net.core.somaxconn = 262144 > net.ipv4.tcp_syncookies = 0 > net.ipv4.tcp_max_orphans = 262144 > net.ipv4.tcp_max_syn_backlog = 262144 > net.ipv4.tcp_synack_retries = 2 > net.ipv4.tcp_syn_retries = 2 > > > Any help would be greatly appreciated > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Fri Mar 4 21:19:34 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 15:19:34 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: Under loaded conditions (3 machines doing httperf separately), I did a separate wget on the side, and am attaching the TCPDUMP of that request. As you can see, there is a delay in the middle where varnish didn't respond immediately. If thread/hit-rate conditions are optimal, this delay should be minimal I thought. Any help would be appreciated. -T On Fri, Mar 4, 2011 at 2:30 PM, Tejaswi Nadahalli wrote: > On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: > >> There?s no health check in the backend. Not sure what that does with a one >> hour grace. I set a short grace with >> >> >> >> if (req.backend.healthy) { >> >> set req.grace = 60s; >> >> } else { >> >> set req.grace = 4h; >> >> } >> > > I am still to add health-checks, directors, etc. Will add them soon. But > those make sense if the cache-primed performance is good. In my test, I am > requesting URLs who I know are already in the cache. Varnishstat also shows > that - there are no cache misses at all. > > >> >> >> You also don?t appear to select a backend in recv. >> > > The default backend seems to be getting picked up automatically. > > -T > > >> >> >> Stefan Caunter >> >> Operations >> >> Torstar Digital >> >> m: (416) 561-4871 >> >> >> >> >> >> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >> *Sent:* March-04-11 1:23 PM >> >> *To:* varnish-misc at varnish-cache.org >> *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed >> Reponse >> >> >> >> On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan >> wrote: >> >> >> >> What does something like firebug show when you request during the load >> test? The delay may be anything from DNS to the ec2 network. >> >> >> The DNS requests are getting resolved super quick. I am unable to see any >> other network issues with EC2. I have a similar machine in the same data >> center running nginx which is doing similar loads, but with no caching >> requirement, and it's running fine. >> >> In my first post, I forgot to attach my VCL, which is a bit too minimal. >> Am I missing something obvious? 
>> >> ------ >> backend default0 { >> .host = "10.202.30.39"; >> .port = "8000"; >> } >> >> sub vcl_recv { >> unset req.http.Cookie; >> set req.grace = 3600s; >> set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); >> } >> >> sub vcl_deliver { >> if (obj.hits > 0) { >> set resp.http.X-Cache = "HIT"; >> } else { >> set resp.http.X-Cache = "MISS"; >> } >> } >> ------------------------- >> >> Could there be some kind of TCP packet pileup that I am missing? >> >> -T >> >> >> >> >> Stefan Caunter >> >> Operations >> >> Torstar Digital >> >> m: (416) 561-4871 >> >> >> >> >> >> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >> *Sent:* March-04-11 1:09 AM >> *To:* varnish-misc at varnish-cache.org >> *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed >> Reponse >> >> >> >> Hi Everyone, >> >> I am seeing a situation similar to : >> >> >> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) >> >> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) >> >> I have httperf loading a varnish cache with never-expire content. While >> the load is on, other browser/wget requests to the varnish server get >> delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem >> to be impacted. So, is it some kind of thread problem? >> >> In production, I see a similar situation with around 1000 req/second load. >> >> >> I am running varnishd with the following command line options (as per >> http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): >> >> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >> 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p >> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >> session_linger=100 -p lru_interval=20 -t 31536000 >> >> I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. >> >> My network sysctl parameters are tuned according to: >> http://varnish-cache.org/trac/wiki/Performance >> fs.file-max = 360000 >> net.ipv4.ip_local_port_range = 1024 65536 >> net.core.rmem_max = 16777216 >> net.core.wmem_max = 16777216 >> net.ipv4.tcp_rmem = 4096 87380 16777216 >> net.ipv4.tcp_wmem = 4096 65536 16777216 >> net.ipv4.tcp_fin_timeout = 3 >> net.core.netdev_max_backlog = 30000 >> net.ipv4.tcp_no_metrics_save = 1 >> net.core.somaxconn = 262144 >> net.ipv4.tcp_syncookies = 0 >> net.ipv4.tcp_max_orphans = 262144 >> net.ipv4.tcp_max_syn_backlog = 262144 >> net.ipv4.tcp_synack_retries = 2 >> net.ipv4.tcp_syn_retries = 2 >> >> >> Any help would be greatly appreciated >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- 20:15:46.896200 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [S], seq 975218147, win 5840, options [mss 1460,sackOK,TS val 239507633 ecr 0,nop,wscale 6], length 0 20:15:46.896220 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [S.], seq 2642556500, ack 975218148, win 5792, options [mss 1460,sackOK,TS val 267323553 ecr 239507633,nop,wscale 9], length 0 20:15:46.932874 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 1, win 92, options [nop,nop,TS val 239507639 ecr 267323553], length 0 20:15:46.932900 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [P.], seq 1:341, ack 1, win 92, options [nop,nop,TS val 239507639 ecr 267323553], length 340 20:15:46.933404 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], ack 341, win 14, options [nop,nop,TS val 267323556 ecr 239507639], length 0 20:16:07.129730 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 1:2897, ack 341, win 14, options [nop,nop,TS val 267325576 ecr 239507639], length 2896 20:16:07.129752 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 2897:4345, ack 341, win 14, options [nop,nop,TS val 267325576 ecr 239507639], length 1448 20:16:07.138422 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 1449, win 137, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.138439 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [.], seq 4345:5793, ack 341, win 14, options [nop,nop,TS val 267325577 ecr 239512697], length 1448 20:16:07.138446 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [P.], seq 5793:5998, ack 341, win 14, options [nop,nop,TS val 267325577 ecr 239512697], length 205 20:16:07.138450 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 2897, win 182, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.138456 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 4345, win 227, options [nop,nop,TS val 239512697 ecr 267325576], length 0 20:16:07.148340 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5793, win 273, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148350 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5998, win 318, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148353 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [F.], seq 341, ack 5998, win 318, options [nop,nop,TS val 239512699 ecr 267325577], length 0 20:16:07.148441 IP 10.202.30.39.80 > 208.64.111.126.7544: Flags [F.], seq 5998, ack 342, win 14, options [nop,nop,TS val 267325578 ecr 239512699], length 0 20:16:07.156951 IP 208.64.111.126.7544 > 10.202.30.39.80: Flags [.], ack 5999, win 318, options [nop,nop,TS val 239512702 ecr 267325578], length 0 From nadahalli at gmail.com Fri Mar 4 22:01:42 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 16:01:42 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: According to http://www.spinics.net/lists/linux-net/msg17545.html - it might be due to "Overflowing the listen() command's incoming connection backlog." I simulated my load again, and here're the listen status before and during the test. 
Before: 3689345 times the listen queue of a socket overflowed 3689345 SYNs to LISTEN sockets dropped During: 3690354 times the listen queue of a socket overflowed 3690354 SYNs to LISTEN sockets dropped My net.core.somaxconn = 262144, which is pretty high. So, I cannot see what else I can do to increase the backlog's length. Is the only way to add more Varnish servers and load balance them behind Nginx or some such? -T On Fri, Mar 4, 2011 at 3:19 PM, Tejaswi Nadahalli wrote: > Under loaded conditions (3 machines doing httperf separately), I did a > separate wget on the side, and am attaching the TCPDUMP of that request. As > you can see, there is a delay in the middle where varnish didn't respond > immediately. If thread/hit-rate conditions are optimal, this delay should be > minimal I thought. > > Any help would be appreciated. > > -T > > > On Fri, Mar 4, 2011 at 2:30 PM, Tejaswi Nadahalli wrote: > >> On Fri, Mar 4, 2011 at 2:25 PM, Caunter, Stefan wrote: >> >>> There?s no health check in the backend. Not sure what that does with a >>> one hour grace. I set a short grace with >>> >>> >>> >>> if (req.backend.healthy) { >>> >>> set req.grace = 60s; >>> >>> } else { >>> >>> set req.grace = 4h; >>> >>> } >>> >> >> I am still to add health-checks, directors, etc. Will add them soon. But >> those make sense if the cache-primed performance is good. In my test, I am >> requesting URLs who I know are already in the cache. Varnishstat also shows >> that - there are no cache misses at all. >> >> >>> >>> >>> You also don?t appear to select a backend in recv. >>> >> >> The default backend seems to be getting picked up automatically. >> >> -T >> >> >>> >>> >>> Stefan Caunter >>> >>> Operations >>> >>> Torstar Digital >>> >>> m: (416) 561-4871 >>> >>> >>> >>> >>> >>> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >>> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >>> *Sent:* March-04-11 1:23 PM >>> >>> *To:* varnish-misc at varnish-cache.org >>> *Subject:* Re: Under Load: Server Unavailable/Connection Dropped/Delayed >>> Reponse >>> >>> >>> >>> On Fri, Mar 4, 2011 at 9:43 AM, Caunter, Stefan >>> wrote: >>> >>> >>> >>> What does something like firebug show when you request during the load >>> test? The delay may be anything from DNS to the ec2 network. >>> >>> >>> The DNS requests are getting resolved super quick. I am unable to see any >>> other network issues with EC2. I have a similar machine in the same data >>> center running nginx which is doing similar loads, but with no caching >>> requirement, and it's running fine. >>> >>> In my first post, I forgot to attach my VCL, which is a bit too minimal. >>> Am I missing something obvious? >>> >>> ------ >>> backend default0 { >>> .host = "10.202.30.39"; >>> .port = "8000"; >>> } >>> >>> sub vcl_recv { >>> unset req.http.Cookie; >>> set req.grace = 3600s; >>> set req.url = regsub(req.url, "&refurl=.*&t=.*&c=.*&r=.*", ""); >>> } >>> >>> sub vcl_deliver { >>> if (obj.hits > 0) { >>> set resp.http.X-Cache = "HIT"; >>> } else { >>> set resp.http.X-Cache = "MISS"; >>> } >>> } >>> ------------------------- >>> >>> Could there be some kind of TCP packet pileup that I am missing? 
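One Varnish-side knob worth ruling out before adding more servers (a guess, not a diagnosis): net.core.somaxconn is only an upper bound, and varnishd asks the kernel for its own, much smaller backlog via the listen_depth parameter (1024 by default in the 2.1-era releases), so a large sysctl value does not by itself enlarge Varnish's accept queue. Raising it looks roughly like this, with the value purely illustrative:

sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -a 0.0.0.0:80 \
    -p listen_depth=16384 -p thread_pools=8 -p thread_pool_min=100 \
    -p thread_pool_max=5000

It can also be changed on a running instance from the management CLI with "param.set listen_depth 16384".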
>>> >>> -T >>> >>> >>> >>> >>> Stefan Caunter >>> >>> Operations >>> >>> Torstar Digital >>> >>> m: (416) 561-4871 >>> >>> >>> >>> >>> >>> *From:* varnish-misc-bounces at varnish-cache.org [mailto: >>> varnish-misc-bounces at varnish-cache.org] *On Behalf Of *Tejaswi Nadahalli >>> *Sent:* March-04-11 1:09 AM >>> *To:* varnish-misc at varnish-cache.org >>> *Subject:* Under Load: Server Unavailable/Connection Dropped/Delayed >>> Reponse >>> >>> >>> >>> Hi Everyone, >>> >>> I am seeing a situation similar to : >>> >>> >>> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-January/005351.html(Connections Dropped Under Load) >>> >>> http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html(Hanging Connections) >>> >>> I have httperf loading a varnish cache with never-expire content. While >>> the load is on, other browser/wget requests to the varnish server get >>> delayed to 10+ seconds. Any ideas what could be happening? ssh doesn't seem >>> to be impacted. So, is it some kind of thread problem? >>> >>> In production, I see a similar situation with around 1000 req/second >>> load. >>> >>> I am running varnishd with the following command line options (as per >>> http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/): >>> >>> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >>> 0.0.0.0:80 -p thread_pools=8 -p thread_pool_min=100 -p >>> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >>> session_linger=100 -p lru_interval=20 -t 31536000 >>> >>> I am on Ubuntu Lucid 64 bit Amazon EC2 C1.XLarge with 8 processing units. >>> >>> My network sysctl parameters are tuned according to: >>> http://varnish-cache.org/trac/wiki/Performance >>> fs.file-max = 360000 >>> net.ipv4.ip_local_port_range = 1024 65536 >>> net.core.rmem_max = 16777216 >>> net.core.wmem_max = 16777216 >>> net.ipv4.tcp_rmem = 4096 87380 16777216 >>> net.ipv4.tcp_wmem = 4096 65536 16777216 >>> net.ipv4.tcp_fin_timeout = 3 >>> net.core.netdev_max_backlog = 30000 >>> net.ipv4.tcp_no_metrics_save = 1 >>> net.core.somaxconn = 262144 >>> net.ipv4.tcp_syncookies = 0 >>> net.ipv4.tcp_max_orphans = 262144 >>> net.ipv4.tcp_max_syn_backlog = 262144 >>> net.ipv4.tcp_synack_retries = 2 >>> net.ipv4.tcp_syn_retries = 2 >>> >>> >>> Any help would be greatly appreciated >>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From drais at icantclick.org Fri Mar 4 23:48:31 2011 From: drais at icantclick.org (david raistrick) Date: Fri, 4 Mar 2011 17:48:31 -0500 (EST) Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: On Fri, 4 Mar 2011, Tejaswi Nadahalli wrote: > Is the only way to add more Varnish servers and load balance them behind > Nginx or some such? Your loadbalancer (varnish, nginx, elb, haproxy, etc) will always be a limiting factor if all traffic only goes through that path. I haven't followed the rest of the thread to know where your real bottleneck is, but just keep that in mind. ;) Your next alternatives (this looks like you're @ AWS) would be ELB in front of varnish (which I do, but with mixed success), or a GSLB (dns based loadbalancing) service in the DNS adding an additional level of seperation. (we use akadns and I have lots of praises and no complaints yet. 
:) -- david raistrick http://www.netmeister.org/news/learn2quote.html drais at icantclick.org http://www.expita.com/nomime.html From nadahalli at gmail.com Sat Mar 5 01:39:56 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Fri, 4 Mar 2011 19:39:56 -0500 Subject: Under Load: Server Unavailable/Connection Dropped/Delayed Reponse In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902C0105759F@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902C01057709@TMG-EVS02.torstar.net> Message-ID: I added an Nginx server in front of the varnish cache, and things are swimming just fine now. Does it have something to do with accepting requests from different hosts? Where Nginx does better out of the box than Varnish does? -T On Fri, Mar 4, 2011 at 5:48 PM, david raistrick wrote: > On Fri, 4 Mar 2011, Tejaswi Nadahalli wrote: > > Is the only way to add more Varnish servers and load balance them behind >> Nginx or some such? >> > > Your loadbalancer (varnish, nginx, elb, haproxy, etc) will always be a > limiting factor if all traffic only goes through that path. > > I haven't followed the rest of the thread to know where your real > bottleneck is, but just keep that in mind. ;) > > Your next alternatives (this looks like you're @ AWS) would be ELB in front > of varnish (which I do, but with mixed success), or a GSLB (dns based > loadbalancing) service in the DNS adding an additional level of seperation. > (we use akadns and I have lots of praises and no complaints yet. :) > > > > > -- > david raistrick http://www.netmeister.org/news/learn2quote.html > drais at icantclick.org http://www.expita.com/nomime.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan at iol.ie Sat Mar 5 10:48:20 2011 From: ronan at iol.ie (Ronan Mullally) Date: Sat, 5 Mar 2011 09:48:20 +0000 (GMT) Subject: Varnish returning 503s for Googlebot requests (Bug #813?) Message-ID: Hi, I'm a varnish noob. I've only just started rolling out a cache in front of a VBulletin site running Apache that is currently using pound for load balancing. I'm running 2.1.5 on a debian lenny box. Testing is going well, apart from one problem. The site runs VBSEO to generate sitemap files. Without excpetion, every time Googlebot tries to request these files Varnish returns a 503: 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" I can request these files via wget direct from the backend as well as direct from varnish without a problem: --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: Apache Content-Type: application/x-gzip Content-Length: 130283 Date: Sat, 05 Mar 2011 09:23:38 GMT X-Varnish: 1282440127 Age: 0 Via: 1.1 varnish Connection: keep-alive Length: 130283 (127K) [application/x-gzip] Saving to: `/dev/null' 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] I've reverted back to default.vcl, the only changes being to define my own backends. Varnishlog output is below. Having googled a bit the only thing I've found is bug #813, but that was apparently fixed prior to 2.1.5. Am I missing something obvious? 
-Ronan Varnishlog output 18 ReqStart c 66.249.66.246 63009 1282436348 18 RxRequest c GET 18 RxURL c /sitemap_362.xml.gz 18 RxProtocol c HTTP/1.1 18 RxHeader c Host: www.sitename.net 18 RxHeader c Connection: Keep-alive 18 RxHeader c Accept: */* 18 RxHeader c From: googlebot(at)googlebot.com 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 18 RxHeader c Accept-Encoding: gzip,deflate 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT 18 VCL_call c recv 18 VCL_return c lookup 18 VCL_call c hash 18 VCL_return c hash 18 VCL_call c miss 18 VCL_return c fetch 18 Backend c 40 sitename sitename1 40 TxRequest b GET 40 TxURL b /sitemap_362.xml.gz 40 TxProtocol b HTTP/1.1 40 TxHeader b Host: www.sitename.net 40 TxHeader b Accept: */* 40 TxHeader b From: googlebot(at)googlebot.com 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 40 TxHeader b Accept-Encoding: gzip,deflate 40 TxHeader b X-Forwarded-For: 66.249.66.246 40 TxHeader b X-Varnish: 1282436348 40 RxProtocol b HTTP/1.1 40 RxStatus b 200 40 RxResponse b OK 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT 40 RxHeader b Server: Apache 40 RxHeader b Content-Length: 130327 40 RxHeader b Content-Encoding: gzip 40 RxHeader b Vary: Accept-Encoding 40 RxHeader b Content-Type: application/x-gzip 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 18 VCL_call c fetch 18 VCL_return c deliver 18 ObjProtocol c HTTP/1.1 18 ObjStatus c 200 18 ObjResponse c OK 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT 18 ObjHeader c Server: Apache 18 ObjHeader c Content-Encoding: gzip 18 ObjHeader c Vary: Accept-Encoding 18 ObjHeader c Content-Type: application/x-gzip 18 FetchError c straight read_error: 0 40 Fetch_Body b 4 4294967295 1 40 BackendClose b sitename1 18 VCL_call c error 18 VCL_return c deliver 18 VCL_call c deliver 18 VCL_return c deliver 18 TxProtocol c HTTP/1.1 18 TxStatus c 503 18 TxResponse c Service Unavailable 18 TxHeader c Server: Varnish 18 TxHeader c Retry-After: 0 18 TxHeader c Content-Type: text/html; charset=utf-8 18 TxHeader c Content-Length: 419 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT 18 TxHeader c X-Varnish: 1282436348 18 TxHeader c Age: 1 18 TxHeader c Via: 1.1 varnish 18 TxHeader c Connection: close 18 Length c 419 18 ReqEnd c 1282436348 1299316657.660784483 1299316658.684726000 0.478523970 1.023897409 0.000044107 18 SessionClose c error 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 From brice at digome.com Sat Mar 5 20:52:54 2011 From: brice at digome.com (Brice Burgess) Date: Sat, 05 Mar 2011 13:52:54 -0600 Subject: varnishncsa & VirtualHost Message-ID: <4D729496.7010301@digome.com> I was previously running a SVN build of Varnish 2.1.4 which included fixes for timeouts with Content-Length. At the time there was no 2.1.5 debian package. I also applied the "-v virtualhost patch" [ticket 485] to varnishncsa to support virtualhost logging (as this is a multi-website webserver). Yesterday we updated to Debian Squeeze and I figured it a good time to switch back to official varnish-cache.org debs. We are now running varnish 2.1.5 but to my dismay I cannot get VirtualHost logging in varnishncsa? Apparently the logformat (-F) switch did not make it into this release?? This was a bad presumption. Are there any current solutions for getting virtualhost logging to work? Are there any unofficial .debs supporting the -F or -v options for varnishncsa? 
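Until the packaged varnishncsa grows a format switch again, one stop-gap for looking at a single site's traffic is to filter the shared-memory log on the Host header with varnishlog instead; this gives raw log records rather than NCSA lines, and it assumes the 2.1-era behaviour where -o accepts a tag and regex pair (the hostname is just an example):

varnishlog -c -o RxHeader 'Host: www\.example\.com'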
Many thanks, ~ Brice From mattias at nucleus.be Sun Mar 6 22:05:05 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Sun, 6 Mar 2011 22:05:05 +0100 Subject: Varnish returning 503s for Googlebot requests (Bug #813?) In-Reply-To: References: Message-ID: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> Hi Ronan, Not sure if you've managed to test this yet, but Google seem to run with "Accept-Encoding: gzip". Perhaps there's a problem serving the compressed version, whereas your manual wget's don't use this accept-encoding? Regards, Mattias -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: zaterdag 5 maart 2011 10:48 To: varnish-misc at varnish-cache.org Subject: Varnish returning 503s for Googlebot requests (Bug #813?) Hi, I'm a varnish noob. I've only just started rolling out a cache in front of a VBulletin site running Apache that is currently using pound for load balancing. I'm running 2.1.5 on a debian lenny box. Testing is going well, apart from one problem. The site runs VBSEO to generate sitemap files. Without excpetion, every time Googlebot tries to request these files Varnish returns a 503: 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" I can request these files via wget direct from the backend as well as direct from varnish without a problem: --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: Apache Content-Type: application/x-gzip Content-Length: 130283 Date: Sat, 05 Mar 2011 09:23:38 GMT X-Varnish: 1282440127 Age: 0 Via: 1.1 varnish Connection: keep-alive Length: 130283 (127K) [application/x-gzip] Saving to: `/dev/null' 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] I've reverted back to default.vcl, the only changes being to define my own backends. Varnishlog output is below. Having googled a bit the only thing I've found is bug #813, but that was apparently fixed prior to 2.1.5. Am I missing something obvious? 
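If the gzip theory holds, the usual 2.1-era recipe is to normalise Accept-Encoding in vcl_recv so the cache keeps at most two variants per object and the backend sees a predictable header; this is the widely circulated snippet from the Varnish documentation of that period, offered as a sketch rather than a confirmed fix for the 503:

sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            # Already-compressed content gains nothing from gzip.
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            remove req.http.Accept-Encoding;
        }
    }
}

Note that the first branch also covers the .xml.gz sitemaps themselves, since they are already compressed on disk.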
-Ronan Varnishlog output 18 ReqStart c 66.249.66.246 63009 1282436348 18 RxRequest c GET 18 RxURL c /sitemap_362.xml.gz 18 RxProtocol c HTTP/1.1 18 RxHeader c Host: www.sitename.net 18 RxHeader c Connection: Keep-alive 18 RxHeader c Accept: */* 18 RxHeader c From: googlebot(at)googlebot.com 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 18 RxHeader c Accept-Encoding: gzip,deflate 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT 18 VCL_call c recv 18 VCL_return c lookup 18 VCL_call c hash 18 VCL_return c hash 18 VCL_call c miss 18 VCL_return c fetch 18 Backend c 40 sitename sitename1 40 TxRequest b GET 40 TxURL b /sitemap_362.xml.gz 40 TxProtocol b HTTP/1.1 40 TxHeader b Host: www.sitename.net 40 TxHeader b Accept: */* 40 TxHeader b From: googlebot(at)googlebot.com 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) 40 TxHeader b Accept-Encoding: gzip,deflate 40 TxHeader b X-Forwarded-For: 66.249.66.246 40 TxHeader b X-Varnish: 1282436348 40 RxProtocol b HTTP/1.1 40 RxStatus b 200 40 RxResponse b OK 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT 40 RxHeader b Server: Apache 40 RxHeader b Content-Length: 130327 40 RxHeader b Content-Encoding: gzip 40 RxHeader b Vary: Accept-Encoding 40 RxHeader b Content-Type: application/x-gzip 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 18 VCL_call c fetch 18 VCL_return c deliver 18 ObjProtocol c HTTP/1.1 18 ObjStatus c 200 18 ObjResponse c OK 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT 18 ObjHeader c Server: Apache 18 ObjHeader c Content-Encoding: gzip 18 ObjHeader c Vary: Accept-Encoding 18 ObjHeader c Content-Type: application/x-gzip 18 FetchError c straight read_error: 0 40 Fetch_Body b 4 4294967295 1 40 BackendClose b sitename1 18 VCL_call c error 18 VCL_return c deliver 18 VCL_call c deliver 18 VCL_return c deliver 18 TxProtocol c HTTP/1.1 18 TxStatus c 503 18 TxResponse c Service Unavailable 18 TxHeader c Server: Varnish 18 TxHeader c Retry-After: 0 18 TxHeader c Content-Type: text/html; charset=utf-8 18 TxHeader c Content-Length: 419 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT 18 TxHeader c X-Varnish: 1282436348 18 TxHeader c Age: 1 18 TxHeader c Via: 1.1 varnish 18 TxHeader c Connection: close 18 Length c 419 18 ReqEnd c 1282436348 1299316657.660784483 1299316658.684726000 0.478523970 1.023897409 0.000044107 18 SessionClose c error 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From straightflush at gmail.com Sun Mar 6 23:39:41 2011 From: straightflush at gmail.com (AD) Date: Sun, 6 Mar 2011 17:39:41 -0500 Subject: Lots of configs Message-ID: Hello, what is the best way to run an instance of varnish that may need different vcl configurations for each hostname. This could end up being 100-500 includes to map to each hostname and then a long if/then block based on the hostname. Is there a more scalable way to deal with this? We have been toying with running one large varnish instance with tons of includes or possibly running multiple instances of varnish (with the config broken up) or spreading the load across different clusters (kind of like sharding) based on hostname to keep the configuration simple. Any best practices here? 
Are there any notes on the performance impact of the size of the VCL or the amount of if/then statements in vcl_recv to process a unique call function ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelcrocha at gmail.com Fri Mar 4 17:50:33 2011 From: rafaelcrocha at gmail.com (rafael) Date: Fri, 04 Mar 2011 13:50:33 -0300 Subject: ESI include does not work until I reload page Message-ID: <4D711859.2010101@gmail.com> Hello everyone. I am using varnish to cache my Plone site, with xdv. I have the following configuration: nginx - varnish - nginx (apply xdv transf) - haproxy - plone. My problem is that the first time I open a page, my esi includes are not interpreted.. I get a blank content, and in firebug I can see the esi statement. (If I ask firefox to show me the source, it makes a new request, so the source displayed has the correct replacements). If I reload the page, or open it in a new tab everything works perfectly. The problem is only the first time a browser open the pages. If I close and reopen the browser, the first time the page is opened, the error appears again.. My varnish.vcl config: # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. backend backend_0 { .host = "127.0.0.1"; .port = "1010"; .connect_timeout = 0.4s; .first_byte_timeout = 300s; .between_bytes_timeout = 60s; } acl purge { "localhost"; "127.0.0.1"; } sub vcl_recv { set req.grace = 120s; set req.backend = backend_0; if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ pass; } if (req.http.If-None-Match) { pass; } if (req.url ~ "createObject") { pass; } remove req.http.Accept-Encoding; lookup; } sub vcl_pipe { # This is not necessary if you do not do any request rewriting. set req.http.connection = "close"; set bereq.http.connection = "close"; } sub vcl_hit { if (req.request == "PURGE") { purge_url(req.url); error 200 "Purged"; } if (!obj.cacheable) { pass; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache"; } } sub vcl_fetch { set obj.grace = 120s; if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") { pass; } if (req.http.Authorization && !obj.http.Cache-Control ~ "public") { pass; } if (obj.http.Content-Type ~ "text/html") { esi; } } sub vcl_hash { set req.hash += req.url; set req.hash += req.http.host; if (req.http.Accept-Encoding ~ "gzip") { set req.hash += "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.hash += "deflate"; } } Thanks for all, Rafael From neltnerb at MIT.EDU Sat Mar 5 04:33:05 2011 From: neltnerb at MIT.EDU (Brian Neltner) Date: Fri, 04 Mar 2011 20:33:05 -0700 Subject: Hosting multiple virtualhosts in apache2 Message-ID: <1299295985.23065.22.camel@zeeman> Dear Varnish, I'll preface this with saying that I am not an IT person, and so although I think I sort of get the gist of how this all works, if I don't have fairly explicit instructions on how things work I get very confused. That said, I have a slicehost server hosting http://saikoled.com which has varnish as a frontend. 
Varnish listens on port 80, and apache2 listens on port 8080 for ServerName www.saikoled.com with ServerAliases for saikoled.com, saikoled.net, and www.saikoled.net. What I want to do is have the slice host a different website from the same IP address, microreactorsolutions.com. I *think* that I know how to set apache2 up with a virtualhost for this, and my thought was to tell it that the virtualhost should listen on port 8079 instead of 8080 (although maybe this isn't necessary). To try to do this, I looked at the documentation for Advanced Backend Documentation here (http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html). However, the application they're looking at here is sufficiently different from what I want to do (although frustratingly close), that I can't tell what to do. It seems that this is setup to have a subdirectory that matches the regexp "^/java/" go to the other port on the backend, which is all well and good, but this doesn't seem to be something that is likely to work with a totally different ServerName (after all, the ^ suggests pretty strongly that the matching doesn't begin until after the ServerName). I also saw in the "Health Checks" some stuff that looked like it did in fact do some stuff with actual ServerNames, but I really don't get how to tell Varnish where to pull requests on port 80 from which as far as I can see is done with regexps that don't handle what I'm looking for. Sorry if this is covered somewhere more obscure in the manual, but as I said, I'm really not particularly good with computers despite the mit email address (I do chemistry...), and trying to work through this entire manual in detail is going to drive me crazy. Best, Brian Neltner From david at firechaser.com Mon Mar 7 09:33:25 2011 From: david at firechaser.com (David Murphy) Date: Mon, 7 Mar 2011 08:33:25 +0000 Subject: Hosting multiple virtualhosts in apache2 In-Reply-To: <1299295985.23065.22.camel@zeeman> References: <1299295985.23065.22.camel@zeeman> Message-ID: Hi Brian Unless the second site is doing something unusual, I don't think you need worry about having its virtualhost listen on another port. Just have all of your websites configured to run on port 8080 and then any site-specific rules (such as which pages/assets can be cached) can be added to the VCL file. We have a server that has a Varnish front end and about 6 or 7 very different websites running under Apache (port 8080) on the backend, all with their own unique domain names. For the most part all sites share the same rules e.g. such as 'always cache images' and 'never cache .php' but a couple of sites need to be treated different e.g. 'do not cache anything in the /blah directory of site abc' and we add that rule to the VCL file. Best, David On Sat, Mar 5, 2011 at 3:33 AM, Brian Neltner wrote: > Dear Varnish, > > I'll preface this with saying that I am not an IT person, and so > although I think I sort of get the gist of how this all works, if I > don't have fairly explicit instructions on how things work I get very > confused. > > That said, I have a slicehost server hosting http://saikoled.com which > has varnish as a frontend. Varnish listens on port 80, and apache2 > listens on port 8080 for ServerName www.saikoled.com with ServerAliases > for saikoled.com, saikoled.net, and www.saikoled.net. > > What I want to do is have the slice host a different website from the > same IP address, microreactorsolutions.com. 
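A small sketch of the arrangement David describes, assuming both sites stay on the same Apache instance on 127.0.0.1:8080 and only the Host header tells them apart; the host normalisation is optional, but it collapses the aliases so each logical site gets one set of cache entries (the default vcl_hash already includes req.http.host):

backend apache {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    set req.backend = apache;

    if (req.http.host ~ "(?i)^(www\.)?saikoled\.(com|net)$") {
        set req.http.host = "www.saikoled.com";
    } elsif (req.http.host ~ "(?i)^(www\.)?microreactorsolutions\.com$") {
        set req.http.host = "www.microreactorsolutions.com";
    }
}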
> > I *think* that I know how to set apache2 up with a virtualhost for this, > and my thought was to tell it that the virtualhost should listen on port > 8079 instead of 8080 (although maybe this isn't necessary). > > To try to do this, I looked at the documentation for Advanced Backend > Documentation here > ( > http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html > ). > > However, the application they're looking at here is sufficiently > different from what I want to do (although frustratingly close), that I > can't tell what to do. It seems that this is setup to have a > subdirectory that matches the regexp "^/java/" go to the other port on > the backend, which is all well and good, but this doesn't seem to be > something that is likely to work with a totally different ServerName > (after all, the ^ suggests pretty strongly that the matching doesn't > begin until after the ServerName). > > I also saw in the "Health Checks" some stuff that looked like it did in > fact do some stuff with actual ServerNames, but I really don't get how > to tell Varnish where to pull requests on port 80 from which as far as I > can see is done with regexps that don't handle what I'm looking for. > > Sorry if this is covered somewhere more obscure in the manual, but as I > said, I'm really not particularly good with computers despite the mit > email address (I do chemistry...), and trying to work through this > entire manual in detail is going to drive me crazy. > > Best, > Brian Neltner > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 14:02:27 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 08:02:27 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: <4D74D763.706@sbgnet.com> The best way would be to use a jump table. By that, I mean to make multiple subroutines in C, and then to jump to the different subroutines by looking up pointers to the subroutines using a string hashing/lookup system. You would also need a flag to indicate whether the hash has been 'initialized' yet as well. The initialization would consist of storing function pointers at the hash locations corresponding to each of the domains. I attempted to do this myself when I first started using varnish, but I was having problems with varnish crashing when attempting to use the code I wrote in C. There may be limitations to the C code that can be used. On 3/6/2011 5:39 PM, AD wrote: > Hello, > what is the best way to run an instance of varnish that may need > different vcl configurations for each hostname. This could end up > being 100-500 includes to map to each hostname and then a long if/then > block based on the hostname. Is there a more scalable way to deal > with this? We have been toying with running one large varnish > instance with tons of includes or possibly running multiple instances > of varnish (with the config broken up) or spreading the load across > different clusters (kind of like sharding) based on hostname to keep > the configuration simple. > > Any best practices here? Are there any notes on the performance > impact of the size of the VCL or the amount of if/then statements in > vcl_recv to process a unique call function ? 
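Before reaching for inline C, it is worth noting what plain VCL already offers for this: per-site logic can live in included files and be dispatched with call, so the per-host rules stay maintainable even though the generated C remains one long chain of string comparisons, which is exactly the cost the jump-table idea in this thread tries to avoid. The file and sub names below are made up:

include "sites/example_com.vcl";   /* defines sub recv_example_com */
include "sites/example_net.vcl";   /* defines sub recv_example_net */

sub vcl_recv {
    if (req.http.host ~ "(?i)(^|\.)example\.com$") {
        call recv_example_com;
    } elsif (req.http.host ~ "(?i)(^|\.)example\.net$") {
        call recv_example_net;
    }
}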
> > Thanks > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 15:23:54 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 09:23:54 -0500 Subject: Lots of configs In-Reply-To: <4D74D763.706@sbgnet.com> References: <4D74D763.706@sbgnet.com> Message-ID: but dont all the configs need to be loaded at runtime, not sure the overhead here? I think what you mentioned seems like a really innovative way to "call" the function but what about anyimpact to "loading" all these configs? If i understand what you are saying, i put a "call test_func;" in vcl_recv which turned into this in C if (VGC_function_test_func(sp)) return (1); if Are you suggesting your hash_table would take over this step ? Adam On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski wrote: > The best way would be to use a jump table. > By that, I mean to make multiple subroutines in C, and then to jump to the > different subroutines by looking > up pointers to the subroutines using a string hashing/lookup system. > > You would also need a flag to indicate whether the hash has been > 'initialized' yet as well. > The initialization would consist of storing function pointers at the hash > locations corresponding to each > of the domains. > > I attempted to do this myself when I first started using varnish, but I was > having problems with varnish > crashing when attempting to use the code I wrote in C. There may be > limitations to the C code that can be > used. > > > On 3/6/2011 5:39 PM, AD wrote: > > Hello, > > what is the best way to run an instance of varnish that may need different > vcl configurations for each hostname. This could end up being 100-500 > includes to map to each hostname and then a long if/then block based on the > hostname. Is there a more scalable way to deal with this? We have been > toying with running one large varnish instance with tons of includes or > possibly running multiple instances of varnish (with the config broken up) > or spreading the load across different clusters (kind of like sharding) > based on hostname to keep the configuration simple. > > Any best practices here? Are there any notes on the performance impact > of the size of the VCL or the amount of if/then statements in vcl_recv to > process a unique call function ? > > Thanks > > > _______________________________________________ > varnish-misc mailing listvarnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 15:45:41 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 09:45:41 -0500 Subject: Lots of configs In-Reply-To: References: <4D74D763.706@sbgnet.com> Message-ID: <4D74EF95.7090907@sbgnet.com> vcl configuration is turned straight into C first of all. You can put your own C code in both the functions and globally. When including headers/libraries, you essentially just have to include the code globally. 
I am not sure if there is any 'init' function when varnish is called, so I was suggesting that the hash be initiated by just checking if the hash has been created yet. This will cause a penalty to the first vcl_recv call that goes through; but that shouldn't matter. Note that I just passed a dummy number as an example to the custom config, and that I didn't show how to do anything in the custom function. In this example, all custom stuff would be in straight C. You would need to use varnish itself to compile what config you want and look at the C code it generates to figure out how to tie in all your custom configs.... eg: C{ #include "hash.c" // a string hashing store/lookup libary; you'll need to write one // or possibly just use some freely available one. hashc *hash=0; void init_hash() { if( hash ) return; hash.store( 'test.com', &test_com ); // same for all domains } void test_com( int n ) { // custom vcl_recv stuff for domain 'test' } } sub vcl_recv { C{ char *domain; // [ place some code to fetch domain and put it in domain here ] if( !hash ) init_hash(); void (*func)(int); func = hash.lookup( domain ); func(1); } } On 3/7/2011 9:23 AM, AD wrote: > but dont all the configs need to be loaded at runtime, not sure the > overhead here? I think what you mentioned seems like a really > innovative way to "call" the function but what about anyimpact to > "loading" all these configs? > > If i understand what you are saying, i put a "call test_func;" in > vcl_recv which turned into this in C > > if (VGC_function_test_func(sp)) > return (1); > if > > Are you suggesting your hash_table would take over this step ? > > Adam > > On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski > wrote: > > The best way would be to use a jump table. > By that, I mean to make multiple subroutines in C, and then to > jump to the different subroutines by looking > up pointers to the subroutines using a string hashing/lookup system. > > You would also need a flag to indicate whether the hash has been > 'initialized' yet as well. > The initialization would consist of storing function pointers at > the hash locations corresponding to each > of the domains. > > I attempted to do this myself when I first started using varnish, > but I was having problems with varnish > crashing when attempting to use the code I wrote in C. There may > be limitations to the C code that can be > used. > > > On 3/6/2011 5:39 PM, AD wrote: >> Hello, >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end >> up being 100-500 includes to map to each hostname and then a long >> if/then block based on the hostname. Is there a more scalable >> way to deal with this? We have been toying with running one >> large varnish instance with tons of includes or possibly running >> multiple instances of varnish (with the config broken up) or >> spreading the load across different clusters (kind of like >> sharding) based on hostname to keep the configuration simple. >> >> Any best practices here? Are there any notes on the performance >> impact of the size of the VCL or the amount of if/then statements >> in vcl_recv to process a unique call function ? 
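Since the snippet above is explicitly pseudocode (hash.store is not C, and a later follow-up notes the missing allocation), here is a rough, compilable variant of the same idea. It assumes the inline-C facilities Varnish 2.1 exposes to VCL (C{ ... }C blocks, the sp session pointer inside subroutines, and VRT_GetHdr for reading a request header); the host names and empty hook bodies are placeholders, and a real deployment would generate the table, or a gperf-style perfect hash over it, from the same site list that produces the per-host logic:

C{
#include <string.h>
#include <strings.h>

/* Hypothetical per-host hooks; real ones would carry the
 * site-specific vcl_recv logic. */
static void recv_example_com(struct sess *sp) { (void)sp; }
static void recv_example_net(struct sess *sp) { (void)sp; }

struct host_hook {
    const char *host;
    void (*fn)(struct sess *);
};

static const struct host_hook host_hooks[] = {
    { "www.example.com", recv_example_com },
    { "www.example.net", recv_example_net },
};

static void
dispatch_host(struct sess *sp, const char *host)
{
    unsigned i;

    if (host == NULL)
        return;
    for (i = 0; i < sizeof(host_hooks) / sizeof(host_hooks[0]); i++) {
        if (strcasecmp(host, host_hooks[i].host) == 0) {
            host_hooks[i].fn(sp);
            return;
        }
    }
}
}C

sub vcl_recv {
    C{
    /* "\005host:" is the length-prefixed form the VCL compiler
     * itself uses for req.http.host. */
    dispatch_host(sp, VRT_GetHdr(sp, HDR_REQ, "\005host:"));
    }C
}

A flat array scanned with strcasecmp is obviously not a hash, but with a few hundred hostnames the scan is negligible next to the rest of a request; swapping it for a gperf-generated lookup only changes dispatch_host.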
>> >> Thanks >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlane at ahbelo.com Mon Mar 7 15:58:08 2011 From: rlane at ahbelo.com (Lane, Richard) Date: Mon, 7 Mar 2011 14:58:08 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup Message-ID: I am looking into supporting Google?s ?First Click Free for Web Search?. I need to allow the GoogleBots to index the full content of my sites but still maintain the Registration wall for everyone else. Google suggests that you detect there GoogleBots by reverse DNS lookup of the requesters IP. Google Desc: http://www.google.com/support/webmasters/bin/answer.py?answer=80553 Has anyone done DNS lookups via VCL to verify access to content or to cache content? System Desc: Varnish 2.1.4 RHEL 5-4 Apache 2.2x - Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattias at nucleus.be Mon Mar 7 16:05:08 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Mon, 7 Mar 2011 16:05:08 +0100 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: References: Message-ID: <18834F5BEC10824891FB8B22AC821A5A013D0CEB@nucleus-srv01.Nucleus.local> Hi, I would look at the user agent to verify if it's a GoogleBot or not, as that's more easily checked via VCL. All GoogleBots also adhere to the correct User-Agent. There really aren't that many users that spoof their User-Agent to gain extra access. Also keep in mind that serving GoogleBot different content than actual users will get you penalties in SEO, eventually dropping your Google ranking. Just, FYI. Regards, Mattias From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Lane, Richard Sent: maandag 7 maart 2011 15:58 To: varnish-misc at varnish-cache.org Subject: Let GoogleBot Crawl full content, reverse DNS lookup I am looking into supporting Google's "First Click Free for Web Search". I need to allow the GoogleBots to index the full content of my sites but still maintain the Registration wall for everyone else. Google suggests that you detect there GoogleBots by reverse DNS lookup of the requesters IP. Google Desc: http://www.google.com/support/webmasters/bin/answer.py?answer=80553 Has anyone done DNS lookups via VCL to verify access to content or to cache content? System Desc: Varnish 2.1.4 RHEL 5-4 Apache 2.2x - Richard From richard.chiswell at mangahigh.com Mon Mar 7 16:08:03 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 07 Mar 2011 15:08:03 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: References: Message-ID: <4D74F4D3.6040008@mangahigh.com> On 07/03/2011 14:58, Lane, Richard wrote: > > I am looking into supporting Google?s ?First Click Free for Web > Search?. I need to allow the GoogleBots to index the full content of > my sites but still maintain the Registration wall for everyone else. > Google suggests that you detect there GoogleBots by reverse DNS lookup > of the requesters IP. 
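For the user-agent half of the check being discussed here, the VCL side is simple; the reverse-DNS verification itself is the part that needs inline C or a backend helper, as suggested in the replies. A minimal sketch, with the marker header name invented for illustration (clearing it first stops clients from spoofing it):

sub vcl_recv {
    unset req.http.X-Maybe-Googlebot;
    if (req.http.User-Agent ~ "Googlebot") {
        # Candidate crawler only; the backend still has to confirm it
        # by resolving client.ip back to a googlebot.com name.
        set req.http.X-Maybe-Googlebot = "1";
    }
}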
> > Google Desc: > http://www.google.com/support/webmasters/bin/answer.py?answer=80553 > > Has anyone done DNS lookups via VCL to verify access to content or to > cache content? I believe this /could/ be done using a C function, but it's not something I've had experience of before. What you could do is detect the Google user-agent in varnish, and then pass that and the IP to a backend script with the original request: such as /* Varnish 2.0.6 psuedo code - may need updating */ if (req.http.user-agent == "Googlebot") { set.http.x-varnish-originalurl = req.url; set req.url = "/googlecheck?ip= " client.ip "&originalurl=" req.url; lookup; } and the Googlecheck script actually does the rDNS look up and if it matches, it returns the contents of the requested url. Richard Chiswell http://www.mangahigh.com (Speaking personally yadda yadda) -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 16:30:22 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 10:30:22 -0500 Subject: Lots of configs In-Reply-To: <4D74EF95.7090907@sbgnet.com> References: <4D74D763.706@sbgnet.com> <4D74EF95.7090907@sbgnet.com> Message-ID: Cool, as for the startup, i wonder if you can instead of trying to insert into VCL_Init, try to do just, as part of the startup process hit a special URL to load the hash_Table. Or another possibility might be to load an external module, and in there, populate the hash. On Mon, Mar 7, 2011 at 9:45 AM, David Helkowski wrote: > vcl configuration is turned straight into C first of all. > You can put your own C code in both the functions and globally. > When including headers/libraries, you essentially just have to include the > code globally. > > I am not sure if there is any 'init' function when varnish is called, so I > was suggesting that > the hash be initiated by just checking if the hash has been created yet. > > This will cause a penalty to the first vcl_recv call that goes through; but > that shouldn't > matter. > > Note that I just passed a dummy number as an example to the custom config, > and that > I didn't show how to do anything in the custom function. In this example, > all custom > stuff would be in straight C. You would need to use varnish itself to > compile what config > you want and look at the C code it generates to figure out how to tie in > all your custom > configs.... > > eg: > > C{ > #include "hash.c" // a string hashing store/lookup libary; you'll need to > write one > // or possibly just use some freely available one. > hashc *hash=0; > > void init_hash() { > if( hash ) return; > hash.store( 'test.com', &test_com ); > // same for all domains > } > > void test_com( int n ) { > // custom vcl_recv stuff for domain 'test' > } > } > > sub vcl_recv { > C{ > char *domain; > // [ place some code to fetch domain and put it in domain here ] > if( !hash ) init_hash(); > void (*func)(int); > func = hash.lookup( domain ); > func(1); > > } > } > > On 3/7/2011 9:23 AM, AD wrote: > > but dont all the configs need to be loaded at runtime, not sure the > overhead here? I think what you mentioned seems like a really innovative > way to "call" the function but what about anyimpact to "loading" all these > configs? > > If i understand what you are saying, i put a "call test_func;" in > vcl_recv which turned into this in C > > if (VGC_function_test_func(sp)) > return (1); > if > > Are you suggesting your hash_table would take over this step ? 
> > Adam > > On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski wrote: > >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then to jump to the >> different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup system. >> >> You would also need a flag to indicate whether the hash has been >> 'initialized' yet as well. >> The initialization would consist of storing function pointers at the hash >> locations corresponding to each >> of the domains. >> >> I attempted to do this myself when I first started using varnish, but I >> was having problems with varnish >> crashing when attempting to use the code I wrote in C. There may be >> limitations to the C code that can be >> used. >> >> >> On 3/6/2011 5:39 PM, AD wrote: >> >> Hello, >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? We have >> been toying with running one large varnish instance with tons of includes or >> possibly running multiple instances of varnish (with the config broken up) >> or spreading the load across different clusters (kind of like sharding) >> based on hostname to keep the configuration simple. >> >> Any best practices here? Are there any notes on the performance impact >> of the size of the VCL or the amount of if/then statements in vcl_recv to >> process a unique call function ? >> >> Thanks >> >> >> _______________________________________________ >> varnish-misc mailing listvarnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 16:56:20 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 10:56:20 -0500 Subject: Lots of configs In-Reply-To: References: <4D74D763.706@sbgnet.com> <4D74EF95.7090907@sbgnet.com> Message-ID: <4D750024.1060400@sbgnet.com> It is true that there are potentially better places to setup the hash, but it is best to check for a null pointer for the hash object anyway any time you use it. The setup itself is also very fast; you just don't want to do it every time. Note in my init function I forgot a 'hash = new hashc()'. Also; if you are going to do this, you will likely have a preset list of domains you are using. In such a case, the best type of hash to use would be a 'minimal perfect hash'. You could use the 'gperf' library to generate a suitable algorithm to map your domain strings into an array. On 3/7/2011 10:30 AM, AD wrote: > Cool, as for the startup, i wonder if you can instead of trying to > insert into VCL_Init, try to do just, as part of the startup process > hit a special URL to load the hash_Table. Or another possibility > might be to load an external module, and in there, populate the hash. 
> > > > On Mon, Mar 7, 2011 at 9:45 AM, David Helkowski > wrote: > > vcl configuration is turned straight into C first of all. > You can put your own C code in both the functions and globally. > When including headers/libraries, you essentially just have to > include the code globally. > > I am not sure if there is any 'init' function when varnish is > called, so I was suggesting that > the hash be initiated by just checking if the hash has been > created yet. > > This will cause a penalty to the first vcl_recv call that goes > through; but that shouldn't > matter. > > Note that I just passed a dummy number as an example to the custom > config, and that > I didn't show how to do anything in the custom function. In this > example, all custom > stuff would be in straight C. You would need to use varnish itself > to compile what config > you want and look at the C code it generates to figure out how to > tie in all your custom > configs.... > > eg: > > C{ > #include "hash.c" // a string hashing store/lookup libary; > you'll need to write one > // or possibly just use some freely available one. > hashc *hash=0; > > void init_hash() { > if( hash ) return; > > hash.store( 'test.com ', &test_com ); > // same for all domains > } > > void test_com( int n ) { > // custom vcl_recv stuff for domain 'test' > } > } > > sub vcl_recv { > C{ > char *domain; > // [ place some code to fetch domain and put it in domain here ] > if( !hash ) init_hash(); > void (*func)(int); > func = hash.lookup( domain ); > func(1); > > } > } > > On 3/7/2011 9:23 AM, AD wrote: >> but dont all the configs need to be loaded at runtime, not sure >> the overhead here? I think what you mentioned seems like a >> really innovative way to "call" the function but what about >> anyimpact to "loading" all these configs? >> >> If i understand what you are saying, i put a "call test_func;" in >> vcl_recv which turned into this in C >> >> if (VGC_function_test_func(sp)) >> return (1); >> if >> >> Are you suggesting your hash_table would take over this step ? >> >> Adam >> >> On Mon, Mar 7, 2011 at 8:02 AM, David Helkowski >> > wrote: >> >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then >> to jump to the different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup >> system. >> >> You would also need a flag to indicate whether the hash has >> been 'initialized' yet as well. >> The initialization would consist of storing function pointers >> at the hash locations corresponding to each >> of the domains. >> >> I attempted to do this myself when I first started using >> varnish, but I was having problems with varnish >> crashing when attempting to use the code I wrote in C. There >> may be limitations to the C code that can be >> used. >> >> >> On 3/6/2011 5:39 PM, AD wrote: >>> Hello, >>> what is the best way to run an instance of varnish that may >>> need different vcl configurations for each hostname. This >>> could end up being 100-500 includes to map to each hostname >>> and then a long if/then block based on the hostname. Is >>> there a more scalable way to deal with this? We have been >>> toying with running one large varnish instance with tons of >>> includes or possibly running multiple instances of varnish >>> (with the config broken up) or spreading the load across >>> different clusters (kind of like sharding) based on hostname >>> to keep the configuration simple. >>> >>> Any best practices here? 
Are there any notes on the >>> performance impact of the size of the VCL or the amount of >>> if/then statements in vcl_recv to process a unique call >>> function ? >>> >>> Thanks >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junxian.yan at gmail.com Mon Mar 7 16:56:09 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 07:56:09 -0800 Subject: Weird "^" in the regex of varnish Message-ID: Hi Guys I encountered this issue in two different environment(env1 and env2). The sample code is like: in vcl_fetch() else if (req.url ~ "^/tables/\w{6}/summary.js") { if (req.http.Set-Cookie !~ " u=\w") { unset beresp.http.Set-Cookie; set beresp.ttl = 2h; set beresp.grace = 22h; return(deliver); } else { return(pass); } } In env1, the request like http://mytest.com/api/v2/tables/vyulrh/read.jsamlcan enter lookup and then enter fetch to create a new cache entry. Next time, the same request will hit cache and do not do fetch anymore In env2, the same request enter and go into vcl_fetch, the regex will fail and can not enter deliver, so the resp will be sent to end user without cache creating. I'm not sure if there is somebody has the same issue. Is it platform related ? R -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlane at ahbelo.com Mon Mar 7 17:51:49 2011 From: rlane at ahbelo.com (Lane, Richard) Date: Mon, 7 Mar 2011 16:51:49 +0000 Subject: Let GoogleBot Crawl full content, reverse DNS lookup In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A013D0CEB@nucleus-srv01.Nucleus.local> Message-ID: Mattias, I am aware of Google's policy about serving different content to search users, which is why I am have to implement their "First Click Free" program. I will use the User-Agent but need to go a step further and verify the crawler is who they say they are by DNS. Cheers, Richard On 3/7/11 9:05 AM, "Mattias Geniar" wrote: > Hi, > > I would look at the user agent to verify if it's a GoogleBot or not, as > that's more easily checked via VCL. All GoogleBots also adhere to the > correct User-Agent. > There really aren't that many users that spoof their User-Agent to gain > extra access. > > Also keep in mind that serving GoogleBot different content than actual > users will get you penalties in SEO, eventually dropping your Google > ranking. Just, FYI. > > Regards, > Mattias > > From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Lane, > Richard > Sent: maandag 7 maart 2011 15:58 > To: varnish-misc at varnish-cache.org > Subject: Let GoogleBot Crawl full content, reverse DNS lookup > > > I am looking into supporting Google's "First Click Free for Web Search". > I need to allow the GoogleBots to index the full content of my sites but > still maintain the Registration wall for everyone else. Google suggests > that you detect there GoogleBots by reverse DNS lookup of the requesters > IP. 
> > Google Desc: > http://www.google.com/support/webmasters/bin/answer.py?answer=80553 > > Has anyone done DNS lookups via VCL to verify access to content or to > cache content? > > System Desc: > Varnish 2.1.4 > RHEL 5-4 > Apache 2.2x > > - Richard From perbu at varnish-software.com Mon Mar 7 19:35:36 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 19:35:36 +0100 Subject: Lots of configs In-Reply-To: References: Message-ID: Hi, On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > > what is the best way to run an instance of varnish that may need different > vcl configurations for each hostname. This could end up being 100-500 > includes to map to each hostname and then a long if/then block based on the > hostname. Is there a more scalable way to deal with this? > CPU and memory bandwidth is abundant on modern servers. I'm actually not sure that having a 500 entries long if/else statement will hamper performance at all. Remember, there will be no system calls. I would guess a modern server will execute at least a four million regex-based if/else per second per CPU core if most of the code and data will be in the on die cache. So executing 500 matches should take about 0.5ms. It might not make sense to optimize this. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Mon Mar 7 19:52:22 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Mon, 07 Mar 2011 13:52:22 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: <4D752966.7000203@sbgnet.com> A modern CPU can run, at most, around 10 million -assembly based- instructions per second. See http://en.wikipedia.org/wiki/Instructions_per_second A regular expression compare is likely at least 20 or so assembly instructions. That gives around 500,000 regular expression compares if you are using 100% of the CPU just for that. A reasonable amount of CPU to consume would be 30% ( at most ). So; you are left with around 150k regular expression checks per second. Lets suppose there are 500 different domains. On average, you will be doing 250 if/else checks per call. 150k / 250 = 600. That means that you will get, under fair conditions, a max of about 600 hits per second. The person asking the question likely has 500 domains running. That gives a little over 1 hit possible per second per domain. Do you think that is an acceptable solution for this person? I think not. Compare it to a hash lookup. A hash lookup, using a good minimal perfect hashing algorithms, will take at most around 10 operations. Using the same math as above, that gives around 300k lookups per second. A hash would be roughly 500 times faster than using if/else... On 3/7/2011 1:35 PM, Per Buer wrote: > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD > wrote: > > > what is the best way to run an instance of varnish that may need > different vcl configurations for each hostname. This could end up > being 100-500 includes to map to each hostname and then a long > if/then block based on the hostname. Is there a more scalable way > to deal with this? > > > CPU and memory bandwidth is abundant on modern servers. I'm actually > not sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. 
I would > guess a modern server will execute at least a four million regex-based > if/else per second per CPU core if most of the code and data will be > in the on die cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Mon Mar 7 19:55:09 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 13:55:09 -0500 Subject: Lots of configs In-Reply-To: References: Message-ID: Thanks Per. I guess the other part of this was to make the config more scalable so we are not constantly adding if/else blocks. Would by nice to have a way to just do something like call(custom_ + req.hostname) On Mon, Mar 7, 2011 at 1:35 PM, Per Buer wrote: > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? >> > > CPU and memory bandwidth is abundant on modern servers. I'm actually not > sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. I would guess a > modern server will execute at least a four million regex-based if/else per > second per CPU core if most of the code and data will be in the on die > cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan at iol.ie Mon Mar 7 20:45:43 2011 From: ronan at iol.ie (Ronan Mullally) Date: Mon, 7 Mar 2011 19:45:43 +0000 (GMT) Subject: Varnish returning 503s for Googlebot requests (Bug #813?) In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> References: <18834F5BEC10824891FB8B22AC821A5A013D0C98@nucleus-srv01.Nucleus.local> Message-ID: Hi Mattias, On Sun, 6 Mar 2011, Mattias Geniar wrote: > Not sure if you've managed to test this yet, but Google seem to run with > "Accept-Encoding: gzip". Perhaps there's a problem serving the > compressed version, whereas your manual wget's don't use this > accept-encoding? You're spot on. Adding an Accept-Encoding header to my wget requests resulted in failures. The content length reported being longer than that actually retrieved. I tracked the fault down to PHP doing compression via zlib.compression. Thanks for your help. 
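A related piece of VCL that often rides along with gzip-capable backends is an Accept-Encoding normalizer in vcl_recv. It does not fix a backend that miscalculates Content-Length when compressing, but it makes responses vary predictably between clients that send the header (as Googlebot does) and clients that do not (as the manual wget tests did). A sketch along the lines of the widely circulated example from the Varnish documentation, in 2.1-era VCL; the extension list is an arbitrary choice to adjust:

    sub vcl_recv {
        if (req.http.Accept-Encoding) {
            if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|mp3|ogg)$") {
                # already-compressed content, no point in asking for gzip
                remove req.http.Accept-Encoding;
            } elsif (req.http.Accept-Encoding ~ "gzip") {
                set req.http.Accept-Encoding = "gzip";
            } elsif (req.http.Accept-Encoding ~ "deflate") {
                set req.http.Accept-Encoding = "deflate";
            } else {
                # unknown algorithm, ask for an uncompressed response
                remove req.http.Accept-Encoding;
            }
        }
    }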
-Ronan > -----Original Message----- > >From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan > Mullally > Sent: zaterdag 5 maart 2011 10:48 > To: varnish-misc at varnish-cache.org > Subject: Varnish returning 503s for Googlebot requests (Bug #813?) > > Hi, > > I'm a varnish noob. I've only just started rolling out a cache in front > of a VBulletin site running Apache that is currently using pound for > load > balancing. > > I'm running 2.1.5 on a debian lenny box. Testing is going well, apart > from one problem. The site runs VBSEO to generate sitemap files. > Without excpetion, every time Googlebot tries to request these files > Varnish returns a 503: > > 66.249.66.246 - - [05/Mar/2011:09:33:53 +0000] "GET > http://www.sitename.net/sitemap_151.xml.gz HTTP/1.1" 503 419 "-" > "Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html)" > > I can request these files via wget direct from the backend as well as > direct from varnish without a problem: > > --2011-03-05 09:23:39-- http://www.sitename.net/sitemap_362.xml.gz > > HTTP request sent, awaiting response... > HTTP/1.1 200 OK > Server: Apache > Content-Type: application/x-gzip > Content-Length: 130283 > Date: Sat, 05 Mar 2011 09:23:38 GMT > X-Varnish: 1282440127 > Age: 0 > Via: 1.1 varnish > Connection: keep-alive > Length: 130283 (127K) [application/x-gzip] > Saving to: `/dev/null' > > 2011-03-05 09:23:39 (417 KB/s) - `/dev/null' saved [130283/130283] > > I've reverted back to default.vcl, the only changes being to define my > own > backends. Varnishlog output is below. Having googled a bit the only > thing I've found is bug #813, but that was apparently fixed prior to > 2.1.5. Am I missing something obvious? > > > -Ronan > > > Varnishlog output > > 18 ReqStart c 66.249.66.246 63009 1282436348 > 18 RxRequest c GET > 18 RxURL c /sitemap_362.xml.gz > 18 RxProtocol c HTTP/1.1 > 18 RxHeader c Host: www.sitename.net > 18 RxHeader c Connection: Keep-alive > 18 RxHeader c Accept: */* > 18 RxHeader c From: googlebot(at)googlebot.com > 18 RxHeader c User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html) > 18 RxHeader c Accept-Encoding: gzip,deflate > 18 RxHeader c If-Modified-Since: Sat, 05 Mar 2011 08:40:46 GMT > 18 VCL_call c recv > 18 VCL_return c lookup > 18 VCL_call c hash > 18 VCL_return c hash > 18 VCL_call c miss > 18 VCL_return c fetch > 18 Backend c 40 sitename sitename1 > 40 TxRequest b GET > 40 TxURL b /sitemap_362.xml.gz > 40 TxProtocol b HTTP/1.1 > 40 TxHeader b Host: www.sitename.net > 40 TxHeader b Accept: */* > 40 TxHeader b From: googlebot(at)googlebot.com > 40 TxHeader b User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html) > 40 TxHeader b Accept-Encoding: gzip,deflate > 40 TxHeader b X-Forwarded-For: 66.249.66.246 > 40 TxHeader b X-Varnish: 1282436348 > 40 RxProtocol b HTTP/1.1 > 40 RxStatus b 200 > 40 RxResponse b OK > 40 RxHeader b Date: Sat, 05 Mar 2011 09:17:37 GMT > 40 RxHeader b Server: Apache > 40 RxHeader b Content-Length: 130327 > 40 RxHeader b Content-Encoding: gzip > 40 RxHeader b Vary: Accept-Encoding > 40 RxHeader b Content-Type: application/x-gzip > 18 TTL c 1282436348 RFC 10 1299316657 0 0 0 0 > 18 VCL_call c fetch > 18 VCL_return c deliver > 18 ObjProtocol c HTTP/1.1 > 18 ObjStatus c 200 > 18 ObjResponse c OK > 18 ObjHeader c Date: Sat, 05 Mar 2011 09:17:37 GMT > 18 ObjHeader c Server: Apache > 18 ObjHeader c Content-Encoding: gzip > 18 ObjHeader c Vary: 
Accept-Encoding > 18 ObjHeader c Content-Type: application/x-gzip > 18 FetchError c straight read_error: 0 > 40 Fetch_Body b 4 4294967295 1 > 40 BackendClose b sitename1 > 18 VCL_call c error > 18 VCL_return c deliver > 18 VCL_call c deliver > 18 VCL_return c deliver > 18 TxProtocol c HTTP/1.1 > 18 TxStatus c 503 > 18 TxResponse c Service Unavailable > 18 TxHeader c Server: Varnish > 18 TxHeader c Retry-After: 0 > 18 TxHeader c Content-Type: text/html; charset=utf-8 > 18 TxHeader c Content-Length: 419 > 18 TxHeader c Date: Sat, 05 Mar 2011 09:17:38 GMT > 18 TxHeader c X-Varnish: 1282436348 > 18 TxHeader c Age: 1 > 18 TxHeader c Via: 1.1 varnish > 18 TxHeader c Connection: close > 18 Length c 419 > 18 ReqEnd c 1282436348 1299316657.660784483 > 1299316658.684726000 0.478523970 1.023897409 0.000044107 > 18 SessionClose c error > 18 StatSess c 66.249.66.246 63009 6 1 5 0 0 4 2984 32012 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From drew.smathers at gmail.com Mon Mar 7 21:58:15 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Mon, 7 Mar 2011 15:58:15 -0500 Subject: Varnish still 503ing after adding grace to VCL Message-ID: Hi all, I'm trying to grace as a means of ensuring that cached content is delivered from varnish past it's TTL if backends can't generate a response. With some experiments this does not seem to happen with our setup. After an object is cached, varnish still returns a 503 within the grace period if a backend goes down. Below are details. version: varnish-2.1.4 SVN 5447M I stripped down my VCL to the following to demonstrate: backend webapp { .host = "127.0.0.1"; .port = "8000"; } sub vcl_recv { set req.backend = webapp; set req.grace = 1h; } sub vcl_fetch { set beresp.grace = 24h; } Running varnish: varnishd -f simple.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 First request: GET /some/path/ HTTP/1.1 Accept-Language: en HTTP/1.1 200 OK Server: WSGIServer/0.1 Python/2.6.6 Vary: Authorization, Accept-Language, X-Gttv-Apikey Etag: "e9c12380818a05ed40ae7df4dad67751" Content-Type: application/json; charset=utf-8 Content-Language: en Cache-Control: max-age=30 Content-Length: 425 Date: Mon, 07 Mar 2011 16:12:56 GMT X-Varnish: 377135316 377135314 Age: 6 Via: 1.1 varnish Connection: close Wait 30 seconds, kill backend app, then make another request through varnish: GET /some/path/ HTTP/1.1 Accept-Language: en HTTP/1.1 503 Service Unavailable Server: Varnish Retry-After: 0 Content-Type: text/html; charset=utf-8 Content-Length: 418 Date: Mon, 07 Mar 2011 16:14:02 GMT X-Varnish: 377135317 Age: 0 Via: 1.1 varnish Connection: close Any help or clarification on request grace would be appreciated. Thanks, -Drew From brice at digome.com Mon Mar 7 22:05:52 2011 From: brice at digome.com (Brice Burgess) Date: Mon, 07 Mar 2011 15:05:52 -0600 Subject: varnishncsa and -F option? Message-ID: <4D7548B0.9090608@digome.com> Is there a production-ready version of varnishncsa that supports the -F switch implemented 4 months ago here: http://www.varnish-cache.org/trac/changeset/46b90935e56c7a448fb33342f03fdcbd14478ac2? The -F / LogFormat switch allows for VirtualHost support -- although appears to have missed the 2.1.5 release? 
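For what it is worth, the changeset linked above adds an Apache-style format string, so an invocation along these lines should become possible once that code ships; the exact specifiers (in particular %{Host}i) are an assumption to check against the varnishncsa documentation of whatever build you run:

    varnishncsa -F '%{Host}i %h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i"'

Prepending the Host request header to an otherwise NCSA-combined line is enough for downstream log splitters to separate virtual hosts.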
Thanks, ~ Brice From perbu at varnish-software.com Mon Mar 7 22:18:03 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 22:18:03 +0100 Subject: Lots of configs In-Reply-To: <4D752966.7000203@sbgnet.com> References: <4D752966.7000203@sbgnet.com> Message-ID: Hi David, List. On Mon, Mar 7, 2011 at 7:52 PM, David Helkowski wrote: > A modern CPU can run, at most, around 10 million -assembly based- > instructions per second. > See http://en.wikipedia.org/wiki/Instructions_per_second > A regular expression compare is likely at least 20 or so assembly > instructions. > That gives around 500,000 regular expression compares if you are using 100% > of the > CPU just for that. A reasonable amount of CPU to consume would be 30% ( at > most ). > So; you are left with around 150k regular expression checks per second. > I guess we should stop speculating. I wrote a short program to do in-cache pcre pattern matching. My laptop (i5 M560) seems to churn through 7M pcre matches a second so I was a bit off. The matches where anchored and small but varying it doesn't seem to affect performance much. The source for my test is here: http://pastebin.com/a68y15hp (.. ) > Compare it to a hash lookup. A hash lookup, using a good minimal perfect > hashing algorithms, > will take at most around 10 operations. Using the same math as above, that > gives around 300k > lookups per second. A hash would be roughly 500 times faster than using > if/else... > Of course a hash lookup is faster. But if you got to deploy a whole bunch of scary inline C that will seriously intimidate the summer intern and makes all the other fear the config it's just not worth it. Of course it isn't as cool a building a hash table of functions in inline C, but is it useful when the speedup gain is lost in buffer bloat anyway? I think not. Cheers, Per. > > > On 3/7/2011 1:35 PM, Per Buer wrote: > > Hi, > > On Sun, Mar 6, 2011 at 11:39 PM, AD wrote: > >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end up being >> 100-500 includes to map to each hostname and then a long if/then block based >> on the hostname. Is there a more scalable way to deal with this? >> > > CPU and memory bandwidth is abundant on modern servers. I'm actually not > sure that having a 500 entries long if/else statement will hamper > performance at all. Remember, there will be no system calls. I would guess a > modern server will execute at least a four million regex-based if/else per > second per CPU core if most of the code and data will be in the on die > cache. So executing 500 matches should take about 0.5ms. > > It might not make sense to optimize this. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.orghttp://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.shenton at nasa.gov Mon Mar 7 22:36:41 2011 From: chris.shenton at nasa.gov (Shenton, Chris (HQ-LM020)[INDYNE INC]) Date: Mon, 7 Mar 2011 15:36:41 -0600 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? Message-ID: To accommodate our hosting environment, we need to run varnish on two different ports, but we want to make both use the same cache. That is, if I have: varnishd -a 127.0.0.1:8001,127.0.0.1:8002 I want client requests to both both 8001 and 8002 ports to share the content of the same cache. So if one client hits :8002 with a URL and another later hits :8001 with the same URL, I want the latter to retrieve the content cached by the former request. In my testing, however, it seems that this is not happening, that the doc is getting cached once per varnishd port. First on 8001, we see it's not from cache as this is the first request: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421263 Then to 8002 where I'd hope it was returned from cache, but the X-Varnish and Age headers indicate it's not: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421264 Now back to the first port, 8001, and we see it is in fact returned from cache: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421265 730421263 And if I try 8002 again, it's also returned from cache: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421266 730421264 Is there a way to make both ports share the same hash? I'm guessing the listening port is included with the hash key so that's why it appears each URL is being stored separately. If so, is there a way to remove the listening port from the hash key the document is stored under? Or would that even work? Thanks. From jhayter at manta.com Mon Mar 7 22:48:37 2011 From: jhayter at manta.com (Jim Hayter) Date: Mon, 7 Mar 2011 21:48:37 +0000 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? In-Reply-To: References: Message-ID: In my environment, port numbers may be on the request, but are not needed to respond nor should they influence the cache. In my vcl_recv, I have the following lines: /* determine vhost name w/out port number */ set req.http.newhost = regsub(req.http.host, "([^:]*)(:.*)?$", "\1"); set req.http.host = req.http.newhost; This strips off the port number from the host name in the request. Doing it this way, the port number is discarded and NOT passed on to the application. It is also not present when creating and looking up hash entries. If you require the port number at the application level, you will have to do something a bit different to preserve it. Jim -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Shenton, Chris (HQ-LM020)[INDYNE INC] Sent: Monday, March 07, 2011 4:37 PM To: varnish-misc at varnish-cache.org Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? To accommodate our hosting environment, we need to run varnish on two different ports, but we want to make both use the same cache. That is, if I have: varnishd -a 127.0.0.1:8001,127.0.0.1:8002 I want client requests to both both 8001 and 8002 ports to share the content of the same cache. So if one client hits :8002 with a URL and another later hits :8001 with the same URL, I want the latter to retrieve the content cached by the former request. 
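If the application does need to see the port the client used, an alternative to rewriting req.http.host is to strip the port only when the hash key is built. A sketch of a custom vcl_hash for Varnish 2.1, adapted from the default one; the regsub pattern is an assumption to verify against the Host headers you actually receive:

    sub vcl_hash {
        set req.hash += req.url;
        if (req.http.host) {
            # drop any :port suffix so :8001 and :8002 share cache entries
            set req.hash += regsub(req.http.host, ":[0-9]+$", "");
        } else {
            set req.hash += server.ip;
        }
        return (hash);
    }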
In my testing, however, it seems that this is not happening, that the doc is getting cached once per varnishd port. First on 8001, we see it's not from cache as this is the first request: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421263 Then to 8002 where I'd hope it was returned from cache, but the X-Varnish and Age headers indicate it's not: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421264 Now back to the first port, 8001, and we see it is in fact returned from cache: > $ curl -v -o /dev/null http://localhost:8001/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421265 730421263 And if I try 8002 again, it's also returned from cache: > $ curl -v -o /dev/null http://localhost:8002/ 2>&1 | grep X-Varnish: > < X-Varnish: 730421266 730421264 Is there a way to make both ports share the same hash? I'm guessing the listening port is included with the hash key so that's why it appears each URL is being stored separately. If so, is there a way to remove the listening port from the hash key the document is stored under? Or would that even work? Thanks. _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From straightflush at gmail.com Mon Mar 7 23:34:45 2011 From: straightflush at gmail.com (AD) Date: Mon, 7 Mar 2011 17:34:45 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: you need to enable a probe in the backend for this to work i believe. On Mon, Mar 7, 2011 at 3:58 PM, Drew Smathers wrote: > Hi all, > > I'm trying to grace as a means of ensuring that cached content is > delivered from varnish past it's TTL if backends can't generate a > response. With some experiments this does not seem to happen with our > setup. After an object is cached, varnish still returns a 503 within > the grace period if a backend goes down. Below are details. > > version: varnish-2.1.4 SVN 5447M > > I stripped down my VCL to the following to demonstrate: > > backend webapp { > .host = "127.0.0.1"; > .port = "8000"; > } > > sub vcl_recv { > set req.backend = webapp; > set req.grace = 1h; > } > > > sub vcl_fetch { > set beresp.grace = 24h; > } > > Running varnish: > > varnishd -f simple.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 > > > First request: > > GET /some/path/ HTTP/1.1 > Accept-Language: en > > HTTP/1.1 200 OK > Server: WSGIServer/0.1 Python/2.6.6 > Vary: Authorization, Accept-Language, X-Gttv-Apikey > Etag: "e9c12380818a05ed40ae7df4dad67751" > Content-Type: application/json; charset=utf-8 > Content-Language: en > Cache-Control: max-age=30 > Content-Length: 425 > Date: Mon, 07 Mar 2011 16:12:56 GMT > X-Varnish: 377135316 377135314 > Age: 6 > Via: 1.1 varnish > Connection: close > > > Wait 30 seconds, kill backend app, then make another request through > varnish: > > GET /some/path/ HTTP/1.1 > Accept-Language: en > > HTTP/1.1 503 Service Unavailable > Server: Varnish > Retry-After: 0 > Content-Type: text/html; charset=utf-8 > Content-Length: 418 > Date: Mon, 07 Mar 2011 16:14:02 GMT > X-Varnish: 377135317 > Age: 0 > Via: 1.1 varnish > Connection: close > > Any help or clarification on request grace would be appreciated. 
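For grace to cover the backend-is-down case, and not just the backend-is-slow case, Varnish also has to know that the backend is sick, which is what a probe provides; req.grace on its own only widens how stale an object may be when it is handed out. A sketch building on the configuration quoted above, for Varnish 2.1; the probe URL and the timings are made-up values to tune:

    backend webapp {
        .host = "127.0.0.1";
        .port = "8000";
        .probe = {
            .url = "/some/path/";   # any cheap URL the application answers quickly
            .interval = 5s;
            .timeout = 1s;
            .window = 5;
            .threshold = 3;
        }
    }

    sub vcl_recv {
        set req.backend = webapp;
        if (req.backend.healthy) {
            set req.grace = 30s;    # keep grace short while the backend is up
        } else {
            set req.grace = 1h;     # serve stale for up to an hour when it is down
        }
    }

    sub vcl_fetch {
        set beresp.grace = 1h;      # objects must be kept at least this long past TTL
    }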
> > Thanks, > -Drew > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Mon Mar 7 23:39:40 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Mar 2011 23:39:40 +0100 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 9:58 PM, Drew Smathers wrote: > Hi all, > > I'm trying to grace as a means of ensuring that cached content is > delivered from varnish past it's TTL if backends can't generate a > response. That's "Saint Mode" - please see http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode I see that there isn't too much details on the semantics there. I'll see if I can add some details. Regards, Per. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew.smathers at gmail.com Mon Mar 7 23:52:44 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Mon, 7 Mar 2011 17:52:44 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 5:39 PM, Per Buer wrote: > On Mon, Mar 7, 2011 at 9:58 PM, Drew Smathers > wrote: >> >> Hi all, >> >> I'm trying to grace as a means of ensuring that cached content is >> delivered from varnish past it's TTL if backends can't generate a >> response. > > That's "Saint Mode" - please > see?http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode > I see that there isn't too much details on the semantics there. I'll see if > I can add some details. Hi Per, I actually tried using saintmode for this problem but one point that I found tricky is that saintmode (as far as i can tell from docs) can only be set on beresp. If the backend is up, that's great because I can check a non-200 status in vcl_fetch() and set. But in the case of all backends being down, vcl_fetch() doesn't even get invoked and there isn't any other routine and object in the routine's execution context (that I know of) where I can set saintmode and restart. Thanks, -Drew From junxian.yan at gmail.com Tue Mar 8 06:22:13 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Mon, 7 Mar 2011 21:22:13 -0800 Subject: Weird "^" in the regex of varnish In-Reply-To: References: Message-ID: I upgraded varnish to 2.1.5 and used log function to trace the req.url and found there was host name in 'req.url'. But I didn't find any more description about this format in wiki. So I have to do a regsub before entering every function. Dose it make sense? Below is varnish log, 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1299561539 1.0 12 SessionOpen c 10.0.2.130 56799 :6081 12 ReqStart c 10.0.2.130 56799 1589705637 12 RxRequest c GET 12 RxURL c http://staging.test.com/purge/tables/vyulrh/summary.js?grid_state_id=3815 On Mon, Mar 7, 2011 at 7:56 AM, Junxian Yan wrote: > Hi Guys > > I encountered this issue in two different environment(env1 and env2). 
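One way to cope with clients that put an absolute URI on the request line, which is what the RxURL entry above shows, is to normalize req.url at the top of vcl_recv, in the spirit of the wiki example referenced later in this thread. A minimal sketch, assuming only http and https URIs need handling:

    sub vcl_recv {
        # turn "GET http://host/path" into "GET /path" so later regexes can anchor on ^/
        if (req.url ~ "^https?://") {
            set req.url = regsub(req.url, "^https?://[^/]+", "");
        }
    }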
> The sample code is like: > in vcl_fetch() > > else if (req.url ~ "^/tables/\w{6}/summary.js") { > if (req.http.Set-Cookie !~ " u=\w") { > unset beresp.http.Set-Cookie; > set beresp.ttl = 2h; > set beresp.grace = 22h; > return(deliver); > } else { > return(pass); > } > } > > In env1, the request like > http://mytest.com/api/v2/tables/vyulrh/read.jsaml can enter lookup and > then enter fetch to create a new cache entry. Next time, the same request > will hit cache and do not do fetch anymore > In env2, the same request enter and go into vcl_fetch, the regex will fail > and can not enter deliver, so the resp will be sent to end user without > cache creating. > > I'm not sure if there is somebody has the same issue. Is it platform > related ? > > > R > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn at ruberg.no Tue Mar 8 07:20:47 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Tue, 08 Mar 2011 07:20:47 +0100 Subject: Weird "^" in the regex of varnish In-Reply-To: References: Message-ID: <4D75CABF.7020403@ruberg.no> On 03/08/2011 06:22 AM, Junxian Yan wrote: > I upgraded varnish to 2.1.5 and used log function to trace the req.url > and found there was host name in 'req.url'. But I didn't find any more > description about this format in wiki. > So I have to do a regsub before entering every function. Dose it make > sense? > > Below is varnish log, > > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1299561539 1.0 > 12 SessionOpen c 10.0.2.130 56799 :6081 > 12 ReqStart c 10.0.2.130 56799 1589705637 > 12 RxRequest c GET > 12 RxURL c > http://staging.test.com/purge/tables/vyulrh/summary.js?grid_state_id=3815 Different User-Agents send different req.url. To normalize them, see http://www.varnish-cache.org/trac/wiki/VCLExampleNormalizingReqUrl Note that technically, there's nothing wrong with using hostnames in req.url, apart from possibly storing the same object under different names. However, as you have found out, some regular expressions might not work as intended until you normalize req.url. -- Bj?rn From tfheen at varnish-software.com Tue Mar 8 07:55:24 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 08 Mar 2011 07:55:24 +0100 Subject: varnishncsa and -F option? In-Reply-To: <4D7548B0.9090608@digome.com> (Brice Burgess's message of "Mon, 07 Mar 2011 15:05:52 -0600") References: <4D7548B0.9090608@digome.com> Message-ID: <8762ru3xur.fsf@qurzaw.varnish-software.com> ]] Brice Burgess | Is there a production-ready version of varnishncsa that supports the | -F | switch implemented 4 months ago here: | http://www.varnish-cache.org/trac/changeset/46b90935e56c7a448fb33342f03fdcbd14478ac2? It'll be in 3.0. | The -F / LogFormat switch allows for VirtualHost support -- although | appears to have missed the 2.1.5 release? It was never intended for or aimed at the 2.1 branch. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From paul.lu81 at gmail.com Mon Mar 7 21:37:49 2011 From: paul.lu81 at gmail.com (Paul Lu) Date: Mon, 7 Mar 2011 12:37:49 -0800 Subject: A lot of if statements to handle hostnames Message-ID: Hi, I have to work with a lot of domain names in my varnish config and I was wondering if there is an easier to way to match the hostname other than a series of if statements. Is there anything like a hash? Or does anybody have any C code to do this? 
example pseudo code:
=================================
vcl_recv(){

 if(req.http.host == "www.domain1.com")
 {
 set req.backend = www_domain1_com;
 # more code
 return(lookup);
 }
 if(req.http.host == "www.domain2.com")
 {
 set req.backend = www_domain2_com;
 # more code
 return(lookup);
 }
 if(req.http.host == "www.domain3.com")
 {
 set req.backend = www_domain3_com;
 # more code
 return(lookup);
 }
}
=================================

Thank you,
Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From junxian.yan at gmail.com Tue Mar 8 08:16:50 2011
From: junxian.yan at gmail.com (Junxian Yan)
Date: Mon, 7 Mar 2011 23:16:50 -0800
Subject: should beresp will be added into cache?
Message-ID:

Hi Guys

I added some logic that changes the beresp headers in vcl_fetch, and I also do a lookup for the same request in vcl_recv. What I expected: the first incoming request is modified by the fetch logic, and the second request is served from cache with the modified headers. The actual result is that the modified headers are not cached.

Here is my code: in vcl_fetch

 if (req.url ~ "/(images|javascripts|stylesheets)/") {
 unset beresp.http.Set-Cookie;
 set beresp.http.Cache-Control = "private, max-age = 3600, must-revalidate"; # 1 hour
 set beresp.ttl = 10m;
 set beresp.http.clientcache = "1";
 return(deliver);
 }

And I also wanna the response of the second request have the max-age = 3600 and clientcache = 1. The actual result is max-age = 0 and no clientcache in response.

I found some explanation in the varnish docs, but it is not as precise as I hoped. Is the beresp object inserted into the cache in its entirety? The description of return(deliver) only says: "Possibly insert the object into the cache, then deliver it to the client. Control will eventually pass to vcl_deliver."
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From indranilc at rediff-inc.com Tue Mar 8 08:32:53 2011
From: indranilc at rediff-inc.com (Indranil Chakravorty)
Date: 8 Mar 2011 07:32:53 -0000
Subject: Re: A lot of if statements to handle hostnames
Message-ID: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com>

Apart from improving the construct to if ... elseif, could you please tell me why you are looking for a different way? Is it only for the ease of writing fewer statements, or is there some other reason you foresee? I am asking because we also have a number of similar constructs in our vcl. Thanks.

Thanks,
Neel

On Tue, 08 Mar 2011 12:31:11 +0530 Paul Lu <paul.lu81 at gmail.com> wrote
>Hi,
>
>I have to work with a lot of domain names in my varnish config and I was wondering if there is an easier to way to match the hostname other than a series of if statements. Is there anything like a hash? Or does anybody have any C code to do this?
> >example pseudo code: >================================= >vcl_recv(){ > > if(req.http.host == "www.domain1.com") > { > set req.backend = www_domain1_com; > # more code > return(lookup); > } > if(req.http.host == "www.domain2.com") > { > set req.backend = www_domain2_com; > # more code > return(lookup); > } > if(req.http.host == "www.domain3.com") > { > set req.backend = www_domain3_com; > # more code > return(lookup); > } >} >================================= > >Thank you, >Paul > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Tue Mar 8 08:39:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 07:39:08 +0000 Subject: Lots of configs In-Reply-To: Your message of "Mon, 07 Mar 2011 08:02:27 EST." <4D74D763.706@sbgnet.com> Message-ID: <39035.1299569948@critter.freebsd.dk> In message <4D74D763.706 at sbgnet.com>, David Helkowski writes: >The best way would be to use a jump table. >By that, I mean to make multiple subroutines in C, and then to jump to >the different subroutines by looking >up pointers to the subroutines using a string hashing/lookup system. The sheer insanity of this proposal had me wondering which vending machine gave you a CS degree instead of the cola you ordered. But upon reading: >I attempted to do this myself when I first started using >varnish, but I was having problems with varnish crashing >when attempting to use the code I wrote in C. There may be >limitations to the C code that can be used. I realized that you're probably just some troll trying to have a bit of a party here on our mailing list, or possibly some teenager in his mothers basement, from where you "rulez teh w0rld" because he is quite clearly Gods Gift To Computers. Or quite likely both. The fact that you have to turn to Wikipedia to find out how many instructions a contemporary CPU can execute per second, and then get the answer wrong by about an order of magnitude makes me almost sad for you. But you may have a future in you still, but there are a lot of good books you will have read to unlock it. I would recommend you start out with "The Mythical Man Month", and continue with pretty much anything Kernighan has written on the subject of programming. At some point, you will understand what Dijkstra is talking about here: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD01xx/EWD117.html Until then, you should not attempt to do anything with a computer that could harm other people. And now: Please shut up before I mock you. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Tue Mar 8 08:41:26 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 07:41:26 +0000 Subject: should beresp will be added into cache? In-Reply-To: Your message of "Mon, 07 Mar 2011 23:16:50 PST." 
Message-ID: <39065.1299570086@critter.freebsd.dk> In message , Junx ian Yan writes: >Here is my code: >in vcl_fetch > if (req.url ~ "/(images|javascripts|stylesheets)/") { > unset beresp.http.Set-Cookie; > set beresp.http.Cache-Control = "private, max-age = 3600, >must-revalidate"; # 1 hour > set beresp.ttl = 10m; > set beresp.http.clientcache = "1"; > return(deliver); > } > >And I also wanna the response of the second request have the max-age = 3600 >and clientcache = 1. The actual result is max-age = 0 and no clientcache in >response Wouldn't it be easier to set the Cache-Control in vcl_deliver then ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From junxian.yan at gmail.com Tue Mar 8 09:23:06 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Tue, 8 Mar 2011 00:23:06 -0800 Subject: should beresp will be added into cache? In-Reply-To: <39065.1299570086@critter.freebsd.dk> References: <39065.1299570086@critter.freebsd.dk> Message-ID: Actually, I need to set clientcache in fetch. But seems varnish can not add this attribute into cache list. On Mon, Mar 7, 2011 at 11:41 PM, Poul-Henning Kamp wrote: > In message , > Junx > ian Yan writes: > > >Here is my code: > >in vcl_fetch > > if (req.url ~ "/(images|javascripts|stylesheets)/") { > > unset beresp.http.Set-Cookie; > > set beresp.http.Cache-Control = "private, max-age = 3600, > >must-revalidate"; # 1 hour > > set beresp.ttl = 10m; > > set beresp.http.clientcache = "1"; > > return(deliver); > > } > > > >And I also wanna the response of the second request have the max-age = > 3600 > >and clientcache = 1. The actual result is max-age = 0 and no clientcache > in > >response > > Wouldn't it be easier to set the Cache-Control in vcl_deliver then ? > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhelkowski at sbgnet.com Tue Mar 8 14:03:47 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 08:03:47 -0500 Subject: Lots of configs In-Reply-To: <39035.1299569948@critter.freebsd.dk> References: <39035.1299569948@critter.freebsd.dk> Message-ID: <4D762933.5080905@sbgnet.com> To write this sort of message, and to the list no doubt, is nothing short of immature. In so much as what I said caused such a response, I apologize for those having bothered to read this. That said, I am going to response to the points made. I would appreciate a 3rd party ( well a 4th at this point ), who has more experience and maturity, would chip in and provide some order to this discussion. On 3/8/2011 2:39 AM, Poul-Henning Kamp wrote: > In message<4D74D763.706 at sbgnet.com>, David Helkowski writes: > >> The best way would be to use a jump table. >> By that, I mean to make multiple subroutines in C, and then to jump to >> the different subroutines by looking >> up pointers to the subroutines using a string hashing/lookup system. > The sheer insanity of this proposal had me wondering which vending > machine gave you a CS degree instead of the cola you ordered. They don't teach jump tables in any college I know of. 
I believe I first learned about them in my own readings of 'Peter Norton's Assembly Language'; a book I first read perhaps about 15 years ago. I still have the book on the shelf. I don't think Peter Norton would ever call an ingenious solution to a challenging problem 'sheer insanity'. He would very likely laugh at the simplicity of what I am suggesting. > But upon reading: > >> I attempted to do this myself when I first started using >> varnish, but I was having problems with varnish crashing >> when attempting to use the code I wrote in C. There may be >> limitations to the C code that can be used. > I realized that you're probably just some troll trying to have > a bit of a party here on our mailing list, or possibly some teenager > in his mothers basement, from where you "rulez teh w0rld" because > he is quite clearly Gods Gift To Computers. This is called an Ad hominen attack. Belittling those you interact with in no way betters your opinion. I am also not sure why this is a response to what you quoted me on. I wrote what I did because I am actually curious if someone has time and effort to get hash tables working in VCL. I would to see a working rendition of it. I didn't really spend much time attempting to make it work, because my own usage of VCL didn't end up requiring it. That is, my statement here is an admission of my own lack of knowledge of the limitations of inline C in VCL. I am not trolling and would seriously like to see working hash tables. > Or quite likely both. > > The fact that you have to turn to Wikipedia to find out how many > instructions a contemporary CPU can execute per second, and then > get the answer wrong by about an order of magnitude makes me almost > sad for you. I will test your code and write a subroutine demonstrating the reality of the numbers I have quoted. Once I have done that I will respond to this statement. > But you may have a future in you still, but there are a lot of good > books you will have read to unlock it. > > I would recommend you start out with "The Mythical Man Month", and > continue with pretty much anything Kernighan has written on the > subject of programming. I have read many discussions on the book in question, and am quite familiar with the writing of Kernighan and Ritchie. They are well written authors on the C language. Their methodologies are also outdated. Their book a on C is over 20 years old at this point. Obviously good information doesn't expire, but a lot of good things have been learned since then. I am not interested in playing knowledge based games. Programming is not a trivia game; it is about applying workable solutions to real world problems in an efficient manner. > At some point, you will understand what Dijkstra is talking about here: > > http://www.cs.utexas.edu/users/EWD/transcriptions/EWD01xx/EWD117.html No doubt this is a well written piece that bears a response of its own. I am not going to respond to this link with any detail at the moment, because you haven't bothered to explain the purpose of putting it here; other than to link to something more well written than your own childish attack. > Until then, you should not attempt to do anything with a computer > that could harm other people. I hardly see how answering a request for the right way to do something with the appropriate correct way is something that will harm. It is up to the reader to decide what method they which to use. Also, I am concerned with your lack of confidence in other users of Varnish. 
I think that there are many learned users of it, and a good number of them are quite capable of taking my hash table suggestion and making it a usable reality. Once it is a reality it could easily be used by other less experienced users of Varnish. How is having an open discussion about an efficient solution to a recurring problem harmful? > And now: Please shut up before I mock you. If you wish to mock; feel free. I would prefer if you send me a direct email and do not send such nonsense to the list, nor to other uninvolved parties. > Poul-Henning > From dhelkowski at sbgnet.com Tue Mar 8 14:20:01 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 08:20:01 -0500 Subject: Lots of configs In-Reply-To: <4D752966.7000203@sbgnet.com> References: <4D752966.7000203@sbgnet.com> Message-ID: <4D762D01.3030209@sbgnet.com> First off, I would like to thank Per Buer for pointing out that I am off by a factor of 1000 in the following statements. I have corrected for that below so that my statements are more clear. My mistake was in considering modern processors as 2 megahertz instead of 2 gigahertz. On 3/7/2011 1:52 PM, David Helkowski wrote: > A modern CPU can run, at most, around 10 million -assembly based- > instructions per second. Make that 10 billion. The math I am using is 5 x 2 gigahertz. > See http://en.wikipedia.org/wiki/Instructions_per_second > A regular expression compare is likely at least 20 or so assembly > instructions. > That gives around 500,000 regular expression compares if you are using > 100% of the > CPU just for that. A reasonable amount of CPU to consume would be 30% > ( at most ). > So; you are left with around 150k regular expression checks per second. The correct numbers here are 500 million. A regular expression compare more likely takes 40 assembly instructions, so I am going to cut this to 250 million. LIkewise, at 30%, that leads to about 80 million. > > Lets suppose there are 500 different domains. On average, you will be > doing 250 if/else > checks per call. 150k / 250 = 600. The new number is 80 million / 250 = 320k > That means that you will get, under fair conditions, a max > of about 600 hits per second. 320,000 hits per second. Obviously, no server is capable of serving up such a number. Just using regular expressions in a cascading if/then will work fine in this case. My apologies for the confusion in this regard. What I can see is a server serving around 10,000 hits per second. That would require about 30x the number of domains. You don't really want to eat up CPU usage for just if/then though, so probably at around 10x the number of domains you'd want to switch to a hash table. So; correcting my conclusion; if you are altering configuration for 5000 domains, then you are going to need a hash table. Otherwise you are going to be fine just using a cascading if/then, despite it being ugly. > The person asking the question likely has 500 domains running. > That gives a little over 1 hit possible per second per domain. Do you > think that is an acceptable > solution for this person? I think not. > > Compare it to a hash lookup. A hash lookup, using a good minimal > perfect hashing algorithms, > will take at most around 10 operations. Using the same math as above, > that gives around 300k > lookups per second. A hash would be roughly 500 times faster than > using if/else... Note that despite my being off by a factor of 1000, the multiplication still holds out. 
If you use a hash table, even with only 500 domains, a hash table will -still- be 500 times faster. I still think it would be great to have a hash table solution available for use in VCL. > > On 3/7/2011 1:35 PM, Per Buer wrote: >> Hi, >> >> On Sun, Mar 6, 2011 at 11:39 PM, AD > > wrote: >> >> >> what is the best way to run an instance of varnish that may need >> different vcl configurations for each hostname. This could end >> up being 100-500 includes to map to each hostname and then a long >> if/then block based on the hostname. Is there a more scalable >> way to deal with this? >> >> >> CPU and memory bandwidth is abundant on modern servers. I'm actually >> not sure that having a 500 entries long if/else statement will hamper >> performance at all. Remember, there will be no system calls. I would >> guess a modern server will execute at least a four million >> regex-based if/else per second per CPU core if most of the code and >> data will be in the on die cache. So executing 500 matches should >> take about 0.5ms. >> >> It might not make sense to optimize this. >> >> -- >> Per Buer, Varnish Software >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer >> Varnish makes websites fly! >> Want to learn more about Varnish? >> http://www.varnish-software.com/whitepapers >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Tue Mar 8 15:01:03 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 08 Mar 2011 15:01:03 +0100 Subject: Lots of configs In-Reply-To: <4D762933.5080905@sbgnet.com> (David Helkowski's message of "Tue, 08 Mar 2011 08:03:47 -0500") References: <39035.1299569948@critter.freebsd.dk> <4D762933.5080905@sbgnet.com> Message-ID: <87d3m11zkw.fsf@qurzaw.varnish-software.com> ]] David Helkowski (if you could add a blank line between quoted text and your own addition that makes it much easier to read your replies) Hi, | This is called an Ad hominen attack. Belittling those you interact | with in no way betters your opinion. I am also not sure why this is a | response to what you quoted me on. I wrote what I did because I am | actually curious if someone has time and effort to get hash tables | working in VCL. I would to see a working rendition of it. I didn't | really spend much time attempting to make it work, because my own | usage of VCL didn't end up requiring it. We'll probably end up implementing hash tables in a vmod at some point, but it's not anywhere near the top of the todo list. What we've been discussing so far would probably not have been useful for your use case above, though. As for doing 3-500 regex or string matches per request that's hardly a big problem for us as Per's numbers demonstrate. cheers, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From phk at phk.freebsd.dk Tue Mar 8 15:53:20 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 14:53:20 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 18:27:29 +0400." 
Message-ID: <88534.1299596000@critter.freebsd.dk> In message , Jona than DeMello writes: >Poul simply comes across as a nervous child, throwing every superiority >imposing cliche out there because he thought a team member was >'threatened'. I received a couple of complaints about flames (on and off list) originating from David, and after reading his contribution, decided that he was not worth the bother, and decided to call his bullshit and get it over with. "Jump Tables" was a very neat concept, about 25-30 years ago, when people tried to squeeze every bit of performance out of a 4.77MHz i8088 chip in a IBM PC. They are however just GOTO in disguise and they have all the disadvantages of GOTO, without, and this is important: without _any_ benefits at all on a modern pipelined and deeply cache starved CPU. That's why I pointed David at Dijkstra epistle and other literature for building moral character as a programmer. If David had come up with a valid point or a good suggestion, then I would possibly tolerate a minimum of behavioural problems from him. But suggesting we abandon 50 years of progress towards structured programming, and use GOTOs to solve a nonexistant problem, for which there are perfectly good and sensible methods, should it materialize, just because he saw a neat trick in an old book and wanted to show of his skillz, earns him no right to flame people in this project. And that's the end of that. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From chris.shenton at nasa.gov Tue Mar 8 18:11:39 2011 From: chris.shenton at nasa.gov (Shenton, Chris (HQ-LM020)[INDYNE INC]) Date: Tue, 8 Mar 2011 11:11:39 -0600 Subject: varnishd -a addr:8001,addr:8002 -- Share same cache? In-Reply-To: References: Message-ID: On Mar 7, 2011, at 4:48 PM, Jim Hayter wrote: > In my environment, port numbers may be on the request, but are not needed to respond nor should they influence the cache. In my vcl_recv, I have the following lines: > > /* determine vhost name w/out port number */ > set req.http.newhost = regsub(req.http.host, "([^:]*)(:.*)?$", "\1"); > set req.http.host = req.http.newhost; > > This strips off the port number from the host name in the request. Doing it this way, the port number is discarded and NOT passed on to the application. It is also not present when creating and looking up hash entries. This looks like it does exactly what we need. I thought I was going have to monkey with server.port, or what the vcl_hash includes in its key calculation, but this is straight-forward. Thanks a lot, Jim. From dhelkowski at sbgnet.com Tue Mar 8 20:50:57 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 08 Mar 2011 14:50:57 -0500 Subject: Lots of configs In-Reply-To: <88534.1299596000@critter.freebsd.dk> References: <88534.1299596000@critter.freebsd.dk> Message-ID: <4D7688A1.8020202@sbgnet.com> On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: > In message, Jona > than DeMello writes: > >> Poul simply comes across as a nervous child, throwing every superiority >> imposing cliche out there because he thought a team member was >> 'threatened'. > I received a couple of complaints about flames (on and off list) > originating from David, and after reading his contribution, decided > that he was not worth the bother, and decided to call his bullshit > and get it over with. 
I will admit to writing one email angrily responding to Per Buer. My anger was due primarily to the statement "if you got to deploy a whole bunch of scary inline C that will seriously intimidate the summer intern and makes all the other fear the config it's just not worth it." The contents of that private email essentially boiled to me saying, in many more words: "not everyone is as stupid as you". Now, I agree that was distasteful, but it isn't much different than you stating you are 'calling my bullshit'. I am not quite sure why you, and others, have decided that this is a pissing match. Also; if it helps anything; I apologize for my ranting email to Per Buer. It was certainly over the line. I am sorry for going off on that. I have my reasons but I would still like to have a meaningful discussion. Per Buer, the very person I ticked off, admitted that a hash lookup is certainly faster. Other people are expressing interested in having a hash system in place with VCL. I myself am even willing to write the system. Sure I may be obnoxious at times in my presentation of what I want done, but I hardly thing it calls for your response or arrogant counter-attitude. > "Jump Tables" was a very neat concept, about 25-30 years ago, when > people tried to squeeze every bit of performance out of a 4.77MHz > i8088 chip in a IBM PC. Jump tables, and gotos, are still perfectly usable on modern system. Good techniques, in their proper place, don't expire. Hash tables for instance certainly have not been replaced by cascading 'if else' structures. Note that I am suggesting hash tables combined with jump tables. I don't see any legitimate objection to such an idea. > They are however just GOTO in disguise and they have all the > disadvantages of GOTO, without, and this is important: without _any_ > benefits at all on a modern pipelined and deeply cache starved CPU. So we should continue using cascading 'if else'? That is _very_ efficient on modern CPU architecture? ... > That's why I pointed David at Dijkstra epistle and other literature > for building moral character as a programmer. Yeah... speaking of that; I read the beginning of the article at the very least. It immediately starts talking about code elegance and the purity of solutions. If anything, it leans very heavily towards hash tables as opposed to long cascading 'if else'. > If David had come up with a valid point or a good suggestion, then > I would possibly tolerate a minimum of behavioural problems from him. How is 'can we please use hash tables' not a valid point and suggestion? > But suggesting we abandon 50 years of progress towards structured > programming, and use GOTOs to solve a nonexistant problem, for which > there are perfectly good and sensible methods, should it materialize, > just because he saw a neat trick in an old book and wanted to show > of his skillz, earns him no right to flame people in this project. Perfectly good and sensible methods such as what? 500 cascading 'if else' for each call? Are you seriously suggesting that is a technique honed to perfection in the last 50 years that is based on structured programming? I read about jump tables and hashing many many years ago. It is hardly a neat trick I recently dug out of an old book. Let me ask you this: have you heard of Bob Jenkins? Would you say his analysis of hash tables is outdated and meaningless? In regard to showing off skills; I could really care less what you or anyone else think of my coding skills. 
I responded to the initial question because I wanted to honestly point people towards a better solution to a recurring problem that has been mentioned in the list. Your last statement implies people can 'earn' the right to flame. ? Is that what you are doing? Using your 'earned' right to flame me? > And that's the end of that. Having the last word is something given to the victor. Arbitrarily declaring your statements to be the last word is pretty arrogant. > Poul-Henning > From drew.smathers at gmail.com Tue Mar 8 21:34:48 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Tue, 8 Mar 2011 15:34:48 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Mon, Mar 7, 2011 at 5:52 PM, Drew Smathers wrote: > On Mon, Mar 7, 2011 at 5:39 PM, Per Buer wrote: [snip] >> >> That's "Saint Mode" - please >> see?http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html#saint-mode >> I see that there isn't too much details on the semantics there. I'll see if >> I can add some details. > > Hi Per, > > I actually tried using saintmode for this problem but one point that I > found tricky is that saintmode (as far as i can tell from docs) can > only be set on beresp. If the backend is up, that's great because I > can check a non-200 status in vcl_fetch() and set. But in the case of > all backends being down, vcl_fetch() doesn't even get invoked and > there isn't any other routine and object in the routine's execution > context (that I know of) where I can set saintmode and restart. > Sorry to bump my own thread, but does anyone know of a way to set saintmode if a backend is down, vs. up and misbehaving (returning 500, etc)? Also, I added a backend probe and this indeed caused grace to kick in once the probe determined the backend as sick.I think the docs should be clarified if this isn't a bug (grace not working without probe): http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers Finally it's somewhat disconcerting that in the interim between a cache expiry and before varnish determines a backend as down (sick) it will 503 - so this could affect many clients during that window. Ideally, I'd like to successfully service requests if there's an object in the cache - period - but I guess this isn't possible now with varnish? Thanks, -Drew From ronan at iol.ie Tue Mar 8 21:38:08 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 8 Mar 2011 20:38:08 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs Message-ID: I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. The current load is light (up to a couple of thousand active sessions, peak is around five thousand). Varnish has a fairly simple config with a director consisting of two Apache backends: backend backend1 { .host = "1.2.3.4"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 1.2.3.4" "Connection: close"; } } backend backend2 { .host = "5.6.7.8"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 90s; .between_bytes_timeout = 90s; .probe = { .timeout = 5s; .interval = 5s; .window = 5; .threshold = 3; .request = "HEAD /favicon.ico HTTP/1.0" "X-Forwarded-For: 5.6.7.8" "Connection: close"; } } The numbers are modest, but significant - about 1 POST in a hundred fails. 
I've upped the backend timeouts to 90 seconds (first_byte / between_bytes) and I'm pretty confident they're responding in well under that time. varnishlog does not show any backend health changes. A typical event looks like: Varnish: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST http://www.sitename.net/newreply.php?do=postreply&t=285227 HTTP/1.1" 503 2623 Backend: a.b.c.d - - [08/Mar/2011:14:48:03 +0000] "POST /newreply.php?do=postreply&t=285227 HTTP/1.1" 200 2686 The POST appears to work fine on the backend but the user gets a 503 from Varnish. It's not unusual to see users getting the error several times in a row (presumably re-submitting the post): a.b.c.d - - [08/Mar/2011:18:21:23 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:36 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 a.b.c.d - - [08/Mar/2011:18:21:50 +0000] "POST http://www.sitename.net/editpost.php?do=updatepost&p=9405408 HTTP/1.1" 503 2623 A typical request is below. The first attempt fails with: 33 FetchError c http first read error: -1 0 (Success) there is presumably a restart and the second attempt (sometimes to backend1, sometimes backend2) fails with: 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This pattern has been the same on the few transactions I've examined in detail. The full log output of a typical request is below. I'm stumped. Has anybody got any ideas what might be causing this? -Ronan 33 RxRequest c POST 33 RxURL c /ajax.php 33 RxProtocol c HTTP/1.1 33 RxHeader c Accept: */* 33 RxHeader c Accept-Language: nl-be 33 RxHeader c Referer: http://www.redcafe.net/ 33 RxHeader c x-requested-with: XMLHttpRequest 33 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 33 RxHeader c Accept-Encoding: gzip, deflate 33 RxHeader c User-Agent: Mozilla/4.0 (compatible; ...) 33 RxHeader c Host: www.sitename.net 33 RxHeader c Content-Length: 82 33 RxHeader c Connection: Keep-Alive 33 RxHeader c Cache-Control: no-cache 33 RxHeader c Cookie: ... 33 VCL_call c recv 33 VCL_return c pass 33 VCL_call c hash 33 VCL_return c hash 33 VCL_call c pass 33 VCL_return c pass 33 Backend c 44 backend backend1 44 TxRequest b POST 44 TxURL b /ajax.php 44 TxProtocol b HTTP/1.1 44 TxHeader b Accept: */* 44 TxHeader b Accept-Language: nl-be 44 TxHeader b Referer: http://www.sitename.net/ 44 TxHeader b x-requested-with: XMLHttpRequest 44 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 44 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 44 TxHeader b Host: www.sitename.net 44 TxHeader b Content-Length: 82 44 TxHeader b Cache-Control: no-cache 44 TxHeader b Cookie: ... 44 TxHeader b Accept-Encoding: gzip 44 TxHeader b X-Forwarded-For: a.b.c.d 44 TxHeader b X-Varnish: 657185708 * 33 FetchError c http first read error: -1 0 (Success) 44 BackendClose b backend1 33 Backend c 47 backend backend2 47 TxRequest b POST 47 TxURL b /ajax.php 47 TxProtocol b HTTP/1.1 47 TxHeader b Accept: */* 47 TxHeader b Accept-Language: nl-be 47 TxHeader b Referer: http://www.sitename.net/ 47 TxHeader b x-requested-with: XMLHttpRequest 47 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 47 TxHeader b User-Agent: Mozilla/4.0 (compatible; ...) 47 TxHeader b Host: www.sitename.net 47 TxHeader b Content-Length: 82 47 TxHeader b Cache-Control: no-cache 47 TxHeader b Cookie: ... 
47 TxHeader b Accept-Encoding: gzip 47 TxHeader b X-Forwarded-For: a.b.c.d 47 TxHeader b X-Varnish: 657185708 * 33 FetchError c backend write error: 11 (Resource temporarily unavailable) 47 BackendClose b backend2 33 VCL_call c error 33 VCL_return c deliver 33 VCL_call c deliver 33 VCL_return c deliver 33 TxProtocol c HTTP/1.1 33 TxStatus c 503 33 TxResponse c Service Unavailable 33 TxHeader c Server: Varnish 33 TxHeader c Retry-After: 0 33 TxHeader c Content-Type: text/html; charset=utf-8 33 TxHeader c Content-Length: 2623 33 TxHeader c Date: Tue, 08 Mar 2011 17:08:33 GMT 33 TxHeader c X-Varnish: 657185708 33 TxHeader c Age: 3 33 TxHeader c Via: 1.1 varnish 33 TxHeader c Connection: close 33 Length c 2623 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 33 SessionClose c error 33 StatSess c a.b.c.d 50044 3 1 1 0 1 0 235 2623 From perbu at varnish-software.com Tue Mar 8 21:51:55 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 8 Mar 2011 21:51:55 +0100 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: Hi Drew, list. On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers wrote: > Sorry to bump my own thread, but does anyone know of a way to set > saintmode if a backend is down, vs. up and misbehaving (returning 500, > etc)? > > Also, I added a backend probe and this indeed caused grace to kick in > once the probe determined the backend as sick.I think the docs should > be clarified if this isn't a bug (grace not working without probe): > > http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers Check out the trunk version of the docs. Committed some earlier today. > Finally it's somewhat disconcerting that in the interim between a > cache expiry and before varnish determines a backend as down (sick) it > will 503 - so this could affect many clients during that window. > Ideally, I'd like to successfully service requests if there's an > object in the cache - period - but I guess this isn't possible now > with varnish? > Actually it is. In the docs there is a somewhat dirty trick where set a marker in vcl_error, restart and pick up on the error and switch backend to one that is permanetly down. Grace kicks in and serves the stale content. Sometime post 3.0 there will be a refactoring of the whole vcl_error handling and we'll end up with something a bit more elegant. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Tue Mar 8 22:54:57 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Tue, 8 Mar 2011 16:54:57 -0500 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> I would look at setting a fail director. Restart if there is a 503, and if restarts > 0 select the patient director with very generous health checking. Your timeouts are reasonable, but try .timeout 20s and .threshold 1 for the patient director. Having a different view of the backends usually deals with occasional 503s. 
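As a rough sketch, it could look something like this. The names are
invented, the IPs are the placeholders from your config, and it is
untested, so adjust to taste:

backend backend1_patient {
    .host = "1.2.3.4";
    .port = "80";
    .first_byte_timeout = 90s;
    .between_bytes_timeout = 90s;
    .probe = {
        .url = "/favicon.ico";
        .timeout = 20s;
        .interval = 5s;
        .window = 5;
        .threshold = 1;
    }
}

# Declare backend2_patient the same way for 5.6.7.8, then:

director patient round-robin {
    { .backend = backend1_patient; }
    { .backend = backend2_patient; }
}

sub vcl_recv {
    if (req.restarts > 0) {
        # Retries get the more forgiving view of the same backends.
        set req.backend = patient;
    }
}

sub vcl_error {
    if (obj.status == 503 && req.restarts == 0) {
        # First 503: try once more against the patient director.
        return (restart);
    }
}

The point is simply that the retry path never gives up on a backend as
quickly as the normal path does.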
Stefan Caunter
Operations
Torstar Digital
m: (416) 561-4871

-----Original Message-----
From: varnish-misc-bounces at varnish-cache.org
[mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally
Sent: March-08-11 3:38 PM
To: varnish-misc at varnish-cache.org
Subject: Varnish 503ing on ~1/100 POSTs

[snip]

From kbrownfield at google.com  Tue Mar  8 22:56:51 2011
From: kbrownfield at google.com (Ken Brownfield)
Date: Tue, 8 Mar 2011 13:56:51 -0800
Subject: Lots of configs
In-Reply-To: <4D7688A1.8020202@sbgnet.com>
References: <88534.1299596000@critter.freebsd.dk> <4D7688A1.8020202@sbgnet.com>
Message-ID: 

An O(1) solution (e.g., a hash table) is a perfectly valid optimization of
an O(N) solution. But you are confusing an O(N) solution with an O(N)
problem.

If the O(N) solution *in actual bona fide reality* becomes a problem for
someone's use-case, I'm sure that an O(1) solution can be implemented as
necessary. If *enough* someones need this O(1) solution, then it will
begin to show up on this project's official radar as a potential built-in
VCL feature or vmod. It's that simple.

If anyone else here wants to continue pettifogging with you, please let
them elect to email you directly, rather than sharing this debate with
those of us who don't. It will substantiate the character and experience
that you profess to have.
Cheers,
--
kb

On Tue, Mar 8, 2011 at 11:50, David Helkowski wrote:

[snip]

From B.Dodd at comicrelief.com  Tue Mar  8 23:06:41 2011
From: B.Dodd at comicrelief.com (Ben Dodd)
Date: Tue, 8 Mar 2011 22:06:41 +0000
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To: 
References: 
Message-ID: 

Hello,

This is only to add we've been experiencing exactly the same issue and are
desperately searching for a solution. Can anyone help?

Thanks,
Ben

On 8 Mar 2011, at 21:55, <varnish-misc-request at varnish-cache.org> wrote:

[snip]

From stewsnooze at gmail.com  Tue Mar  8 23:11:39 2011
From: stewsnooze at gmail.com (Stewart Robinson)
Date: Tue, 8 Mar 2011 16:11:39 -0600
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To: 
References: 
Message-ID: <8371073562863633333@unknownmsgid>

On 8 Mar 2011, at 16:07, Ben Dodd wrote:

[snip]

Whilst this may not be a fix to a possible bug in varnish have you tried
switching posts to pipe instead of pass?

Stew

From ronan at iol.ie  Tue Mar  8 23:32:29 2011
From: ronan at iol.ie (Ronan Mullally)
Date: Tue, 8 Mar 2011 22:32:29 +0000 (GMT)
Subject: Varnish 503ing on ~1/100 POSTs
In-Reply-To: <8371073562863633333@unknownmsgid>
References: <8371073562863633333@unknownmsgid>
Message-ID: 

On Tue, 8 Mar 2011, Stewart Robinson wrote:

> Whilst this may not be a fix to a possible bug in varnish have you tried
> switching posts to pipe instead of pass?
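If I've understood the suggestion correctly, that would be something along
these lines (untested sketch):

sub vcl_recv {
    if (req.request == "POST") {
        # Hand POSTs straight through to the backend, bypassing the
        # normal fetch path.
        return (pipe);
    }
}

sub vcl_pipe {
    # Make sure a piped connection is not reused for further requests.
    set bereq.http.connection = "close";
}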
This might well help, but I'd have no way of knowing for sure. The backend servers indicate the requests via varnish are processed correctly. I'm not able to reproduce the problem at will so I'd be relying on user feedback to determine if the problem still occurs and that's unreliable at best. It is of course better than having the problem occur, but I'd rather take the opportunity to try and get to the bottom of it while I can. I only deployed varnish a couple of days ago. The site will be fairly quiet until the end of the week. I'll resort to pipe if I've not got a fix by then. -Ronan From phk at phk.freebsd.dk Tue Mar 8 23:36:18 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 22:36:18 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 14:50:57 EST." <4D7688A1.8020202@sbgnet.com> Message-ID: <7141.1299623778@critter.freebsd.dk> In message <4D7688A1.8020202 at sbgnet.com>, David Helkowski writes: >On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: >> In message, Jona >My anger was due primarily to the statement "if you got to deploy a >whole bunch of scary inline C that will seriously intimidate the >summer intern and makes all the other fear the config it's just not >worth it." > >The contents of that private email essentially boiled to me saying, in >many more words: >"not everyone is as stupid as you". Well, to put it plainly and simply: In the context of the Varnish project, viewed through the prism that is our project philosphy, Per is Right and you are Wrong. Inline C is the last resort, it is there because there needs to be a last resort, but optimizations like the one you propose does not belong *anywhere* inline C or not. End of story. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ronan at iol.ie Tue Mar 8 23:42:04 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 8 Mar 2011 22:42:04 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902C01057D6E@TMG-EVS02.torstar.net> Message-ID: Hi Stefan, On Tue, 8 Mar 2011, Caunter, Stefan wrote: > I would look at setting a fail director. Restart if there is a 503, and > if restarts > 0 select the patient director with very generous health > checking. Your timeouts are reasonable, but try .timeout 20s and > .threshold 1 for the patient director. Having a different view of the > backends usually deals with occasional 503s. Thanks for your email. Unfortunately as a varnish newbie most of it went right over my head. Are you suggesting I make changes to the health check probes to see if they will up/down backends more aggressively? I would be surprised if there are underlying health issues with the back end. The site has been running fine under everything but the heaviest of loads using pound as the front end for the past couple of years, and the backend log entries I've looked at suggest that apache is processing the POSTs fine, it's varnish that's returning the error. 
-Ronan

> -----Original Message-----
> From: varnish-misc-bounces at varnish-cache.org
> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan
> Mullally
> Sent: March-08-11 3:38 PM
> To: varnish-misc at varnish-cache.org
> Subject: Varnish 503ing on ~1/100 POSTs
>
> [snip]

From dhelkowski at sbgnet.com  Wed Mar  9 00:09:59 2011
From: dhelkowski at sbgnet.com (David Helkowski)
Date: Tue, 8 Mar 2011 18:09:59 -0500 (EST)
Subject: Lots of configs
In-Reply-To: <7141.1299623778@critter.freebsd.dk>
Message-ID: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com>

Please refrain from continuing to message the list on this topic.
I will not do so either, provided you stop sending things like
'David is wrong, and his ideas should never be considered' to the list.
It is entirely childish, and I am sure people are sick of seeing this
sort of garbage in the list.

My only response to this latest attack is that Varnish is open
source software. I can and will publish a how-to on using hashing
in the manner that I have described. There is nothing that you can
do to stop it, and I am sure people will take advantage of it.
----- Original Message ----- From: "Poul-Henning Kamp" To: "David Helkowski" Cc: "Jonathan DeMello" , varnish-misc at varnish-cache.org, "Per Buer" Sent: Tuesday, March 8, 2011 5:36:18 PM Subject: Re: Lots of configs In message <4D7688A1.8020202 at sbgnet.com>, David Helkowski writes: >On 3/8/2011 9:53 AM, Poul-Henning Kamp wrote: >> In message, Jona >My anger was due primarily to the statement "if you got to deploy a >whole bunch of scary inline C that will seriously intimidate the >summer intern and makes all the other fear the config it's just not >worth it." > >The contents of that private email essentially boiled to me saying, in >many more words: >"not everyone is as stupid as you". Well, to put it plainly and simply: In the context of the Varnish project, viewed through the prism that is our project philosphy, Per is Right and you are Wrong. Inline C is the last resort, it is there because there needs to be a last resort, but optimizations like the one you propose does not belong *anywhere* inline C or not. End of story. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Wed Mar 9 00:45:57 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Mar 2011 23:45:57 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 18:09:59 EST." <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> Message-ID: <7556.1299627957@critter.freebsd.dk> In message <635787275.835680.1299625799368.JavaMail.root at mail-01.sbgnet.com>, D avid Helkowski writes: >Please refrain from continuing to message the list on this topic. I prefer the archives show the full exchange, should any of your future potential employers google your name. If you do not like that, then you should think carefully about what you post in public. >My only response to this latest attack is that Varnish is open >source software. I can and will publish a how-to on using hashing >in the manner that I have described. There is nothing that you can >do to stop it, and I am sure people will take advantage of it. Have fun :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From jhalfmoon at milksnot.com Wed Mar 9 00:50:45 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Wed, 09 Mar 2011 00:50:45 +0100 Subject: Lots of configs In-Reply-To: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> References: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> Message-ID: <4D76C0D5.9030809@milksnot.com> On 03/09/2011 12:09 AM, David Helkowski wrote: > Please refrain from continuing to message the list on this topic. > I will not do so either, provided you stop sending things like Are you proposing some kind of 'hushing' algorithm? > 'David is wrong, and his ideas should never be considered' to the list. > It is entirely childish, and I am sure people are sick of seeing this > sort of garbage in the list. > > My only response to this latest attack is that Varnish is open > source software. I can and will publish a how-to on using hashing > in the manner that I have described. There is nothing that you can > do to stop it, and I am sure people will take advantage of it. 
> From geoff at uplex.de Wed Mar 9 00:58:18 2011 From: geoff at uplex.de (Geoff Simmons) Date: Wed, 09 Mar 2011 00:58:18 +0100 Subject: Lots of configs In-Reply-To: <4D76C0D5.9030809@milksnot.com> References: <635787275.835680.1299625799368.JavaMail.root@mail-01.sbgnet.com> <4D76C0D5.9030809@milksnot.com> Message-ID: <4D76C29A.1030502@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 3/9/11 12:50 AM, Johnny Halfmoon wrote: > > Are you proposing some kind of 'hushing' algorithm? Some threads fail silently, whereas other fail verbosely. - -- UPLEX Systemoptimierung Schwanenwik 24 22087 Hamburg http://uplex.de/ Mob: +49-176-63690917 -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.14 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNdsKZAAoJEOUwvh9pJNUR6UUQAKICKY6d76uO3NIDOtoQm7y2 AjqhL9M12lewkF3zWTfBnZjiiOSnifIXcdgBIipOE/VgkUIdxTO5PprIB9Zw/IYF YoDlHyqZxpdZvSFFeMaxR/hG08RQCaT3bQ7DQaX6XEM7hO5dYaYNY7Se9SPfQoIJ sOn/W/+UtQMZokhc1onXWp59ePIgZAUulqzdtDMmTBt51RXnyDLwvgiYAwOeCpUs t1/BW6tZ+Oc6F5MvtcLdN2z/8xYEcwyFgNCh1xaqHoytu/6VPmIWEubl3ATStMM1 BDf6Qa3CUCoDiWqEhb6iU3jCMVhVQRYfKku5uXL9kreV+Ilki6egTpVy8T9Q4AfI 2VZJuriQnsLWJn5gU8Ue2Ax1t3Pi5VKD/EOD3OdTLzfLGb53AtVHtj7QsI2EOqpr /KYnbylVfVv15luhm9NFyHF6yt3yJ2Ox8LqXu4RGCJ9iKwAdjOmHpNi75yNadRj1 nzoxlMBPt+56+8yfjpbfndFY7GdBeW8H7sOCl4L9fTjwo087mGjEZQgertpMpujs c/1BvxOFvpzUVFCbYzEYFXaKz1o+pVzONev03S4praOyUMjRcuWaGU9anIO0w5cO ue8kY21o5lYPpkpmYUud+X1oECnMkHToOUmqDh6avno14vB/IrvaRqDWEn/VtsYh cMNzqqwbBL4hi8FHS6J+ =bbhj -----END PGP SIGNATURE----- From drew.smathers at gmail.com Wed Mar 9 01:23:19 2011 From: drew.smathers at gmail.com (Drew Smathers) Date: Tue, 8 Mar 2011 19:23:19 -0500 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 3:51 PM, Per Buer wrote: > Hi Drew, list. > On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers > wrote: >> >> Sorry to bump my own thread, but does anyone know of a way to set >> saintmode if a backend is down, vs. up and misbehaving (returning 500, >> etc)? >> >> Also, I added a backend probe and this indeed caused grace to kick in >> once the probe determined the backend as sick.I think the docs should >> be clarified if this isn't a bug (grace not working without probe): >> >> http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers > > Check out the trunk version of the docs. Committed some?earlier?today. > Thanks, I see a lot is getting >> >> Finally it's somewhat disconcerting that in the interim between a >> cache expiry and before varnish determines a backend as down (sick) it >> will 503 - so this could affect many clients during that window. >> Ideally, I'd like to successfully service requests if there's an >> object in the cache - period - but I guess this isn't possible now >> with varnish? > > Actually it is. In the docs there is a somewhat dirty trick where set a > marker in vcl_error, restart and pick up on the error and switch backend to > one that is permanetly down. Grace kicks in and serves the stale content. > Sometime post 3.0 there will be a refactoring of the whole vcl_error > handling and we'll end up with something a bit more elegant. > Well a dirty trick is good enough if makes a paying customer for me. :P This is working perfectly now. I would suggest giving an example of "magic marker" mentioned in the document which mentions the trick (http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html). 
Here's a stripped down version of my VCL incorporating the trick: backend webapp { .host = "127.0.0.1"; .port = "8000"; .probe = { .url = "/hello/"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; } } /* A backend that will always fail. */ backend failapp { .host = "127.0.0.1"; .port = "9000"; .probe = { .url = "/hello/"; .interval = 12h; .timeout = 1s; .window = 1; .threshold = 1; } } sub vcl_recv { if (req.http.X-Varnish-Error == "1") { set req.backend = failapp; unset req.http.X-Varnish-Error; } else { set req.backend = webapp; } if (! req.backend.healthy) { set req.grace = 24h; } else { set req.grace = 1m; } } sub vcl_error { if ( req.http.X-Varnish-Error != "1" ) { set req.http.X-Varnish-Error = "1"; return (restart); } } sub vcl_fetch { set beresp.grace = 24h; } From dhelkowski at sbgnet.com Wed Mar 9 01:48:30 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 8 Mar 2011 19:48:30 -0500 (EST) Subject: Lots of configs In-Reply-To: <7556.1299627957@critter.freebsd.dk> Message-ID: <978396342.836068.1299631710854.JavaMail.root@mail-01.sbgnet.com> ----- Original Message ----- From: "Poul-Henning Kamp" To: "David Helkowski" Cc: "Jonathan DeMello" , varnish-misc at varnish-cache.org, "Per Buer" Sent: Tuesday, March 8, 2011 6:45:57 PM Subject: Re: Lots of configs In message <635787275.835680.1299625799368.JavaMail.root at mail-01.sbgnet.com>, D avid Helkowski writes: >>Please refrain from continuing to message the list on this topic. >I prefer the archives show the full exchange, should any of your >future potential employers google your name. Once again; this is pretty rude. My point is not to waste people's energy reading this, not to attempt to hide anything. At a previous point in my past; I had my entire diary posted on the internet; over 1 million words. You won't find that I am the sort of person's who attempt to hide anything. >If you do not like that, then you should think carefully about >what you post in public. I agree with that, but I think that you are responsible for how you treat or abuse others in public. If you are in a position of authority and knowledge you should treat those beneath you well; not mock them. >>My only response to this latest attack is that Varnish is open >>source software. I can and will publish a how-to on using hashing >>in the manner that I have described. There is nothing that you can >>do to stop it, and I am sure people will take advantage of it. Have fun :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From dhelkowski at sbgnet.com Wed Mar 9 01:51:01 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Tue, 8 Mar 2011 19:51:01 -0500 (EST) Subject: Lots of configs In-Reply-To: <4D76C0D5.9030809@milksnot.com> Message-ID: <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> ---- Original Message ----- From: "Johnny Halfmoon" To: "David Helkowski" Cc: "Poul-Henning Kamp" , varnish-misc at varnish-cache.org, "Jonathan DeMello" Sent: Tuesday, March 8, 2011 6:50:45 PM Subject: Re: Lots of configs On 03/09/2011 12:09 AM, David Helkowski wrote: >> Please refrain from continuing to message the list on this topic. >> I will not do so either, provided you stop sending things like >Are you proposing some kind of 'hushing' algorithm? I posted this request at the suggestion of a 3rd party; because I did not wish to waste people's time. 
Seeing as PHK is essentially the authority and controller of the list; I am going to continue responding as appropriate unless I am directed not to by PHK. >> 'David is wrong, and his ideas should never be considered' to the list. >> It is entirely childish, and I am sure people are sick of seeing this >> sort of garbage in the list. >> >> My only response to this latest attack is that Varnish is open >> source software. I can and will publish a how-to on using hashing >> in the manner that I have described. There is nothing that you can >> do to stop it, and I am sure people will take advantage of it. > From straightflush at gmail.com Wed Mar 9 03:06:30 2011 From: straightflush at gmail.com (AD) Date: Tue, 8 Mar 2011 21:06:30 -0500 Subject: Lots of configs In-Reply-To: <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> References: <4D76C0D5.9030809@milksnot.com> <1418857584.836074.1299631861440.JavaMail.root@mail-01.sbgnet.com> Message-ID: As the OP, i would like to get the discussion on this thread back to something useful. That being said... Assuming there was an O(1) (or some ideal) mechanism to lookup req.host and map it to a custom function, i notice that i get the error "Unused function custom_host" if there is not an explicit call in the VCL. Aside from having a dummy subroutine that listed all the "calls", is there a cleaner way to deal with this? I am also going to take a stab at making this a module, i already did this with an md5 function so I think that will solve the "pre-loading" problem. Adam On Tue, Mar 8, 2011 at 7:51 PM, David Helkowski wrote: > ---- Original Message ----- > From: "Johnny Halfmoon" > To: "David Helkowski" > Cc: "Poul-Henning Kamp" , > varnish-misc at varnish-cache.org, "Jonathan DeMello" < > demello.itp at googlemail.com> > Sent: Tuesday, March 8, 2011 6:50:45 PM > Subject: Re: Lots of configs > > > On 03/09/2011 12:09 AM, David Helkowski wrote: > >> Please refrain from continuing to message the list on this topic. > >> I will not do so either, provided you stop sending things like > > > >Are you proposing some kind of 'hushing' algorithm? > > I posted this request at the suggestion of a 3rd party; because I did > not wish to waste people's time. Seeing as PHK is essentially the authority > and controller of the list; I am going to continue responding as > appropriate > unless I am directed not to by PHK. > > >> 'David is wrong, and his ideas should never be considered' to the list. > >> It is entirely childish, and I am sure people are sick of seeing this > >> sort of garbage in the list. > >> > >> My only response to this latest attack is that Varnish is open > >> source software. I can and will publish a how-to on using hashing > >> in the manner that I have described. There is nothing that you can > >> do to stop it, and I am sure people will take advantage of it. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Wed Mar 9 09:18:52 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 09 Mar 2011 08:18:52 +0000 Subject: Lots of configs In-Reply-To: Your message of "Tue, 08 Mar 2011 21:06:30 EST." Message-ID: <9583.1299658732@critter.freebsd.dk> In message , AD w rites: >As the OP, i would like to get the discussion on this thread back to >something useful. That being said... 
Arthur and I brainstormed this issue on our way to cake after VUG3 and a couple of ideas came up which may be worth looking at. At the top-heavy end, is having VCL files tell which domains they apply to, possibly something like: domains { "*.example.com"; ! "notthisone.example.com"; "*.example.biz"; } There are a large number of "what happens if I then do..." questions that needs answered sensibly to make that work, but I think it is doable and worthwhile. The next one we talked about is letting backend declarations declare which domains they apply to, pretty much same syntax as above, now just inside a backend. This would modify the current default backend selection and nothing more. There needs to be some kind of "matched no backend" handling. And finally, since most users with massive domains will need or want to reload VCL for trivial addition/removals of domains, somebody[TM] should probably write a VMOD which looks a domain up in a suitable database file (db/dbm/whatever) There are many ways we can mould and modify these ideas, and I invite you to hash out which way you would prefer it work... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From david at firechaser.com Wed Mar 9 09:41:14 2011 From: david at firechaser.com (David Murphy) Date: Wed, 9 Mar 2011 09:41:14 +0100 Subject: Lots of configs In-Reply-To: <9583.1299658732@critter.freebsd.dk> References: <9583.1299658732@critter.freebsd.dk> Message-ID: >And finally, since most users with massive domains will need or want >to reload VCL for trivial addition/removals of domains, somebody[TM] >should probably write a VMOD which looks a domain up in a suitable >database file (db/dbm/whatever) I was wondering, is there any way for us to be able to run an external lookup that can form part of decision-making in VCL. For example, a file or db lookup to see if a value is true/false and that will determine which sections of VCL code run? A real-world example is where we have a waiting room feature that aims to limit traffic reaching a payment portal. When the waiting room is on we'd like Varnish to hold onto the traffic. When turned off we would then forward the requests hitting VCL to the payment system. Currently we are doing this in the backend with a PHP / MySQL lookup and it works, but it's far from ideal. Perhaps a better way would be to pass in the true/false value as a command line arg to Varnish as a 'reload' rather than restart (similar to Apache, I guess) so we don't lose connections. Would also mean that no lookups are required per request. The waiting room state changes on/off only a few times a day. Not sure if this is possible or even desirable but would appreciate your thoughts/suggestions. Thanks, David From phk at phk.freebsd.dk Wed Mar 9 10:01:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 09 Mar 2011 09:01:08 +0000 Subject: Lots of configs In-Reply-To: Your message of "Wed, 09 Mar 2011 09:41:14 +0100." 
Message-ID: <9904.1299661268@critter.freebsd.dk> In message , Davi d Murphy writes: >>And finally, since most users with massive domains will need or want >>to reload VCL for trivial addition/removals of domains, somebody[TM] >>should probably write a VMOD which looks a domain up in a suitable >>database file (db/dbm/whatever) > >I was wondering, is there any way for us to be able to run an external >lookup that can form part of decision-making in VCL. For example, a >file or db lookup to see if a value is true/false and that will >determine which sections of VCL code run? Writing a VMOD that does that shouldn't be hard, we just need to find somebody with a pocket full of round tuits. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From paul.lu81 at gmail.com Tue Mar 8 18:50:44 2011 From: paul.lu81 at gmail.com (Paul Lu) Date: Tue, 8 Mar 2011 09:50:44 -0800 Subject: A lot of if statements to handle hostnames In-Reply-To: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> References: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Message-ID: Primarily just to make the code cleaner and a little concerned if I have a lot of hostnames. 100 for example. Having to potentially traverse several if statements for each request seems inefficient to me. Thank you, Paul On Mon, Mar 7, 2011 at 11:32 PM, Indranil Chakravorty < indranilc at rediff-inc.com> wrote: > Apart from improving the construct to if ... elseif , could you please tell > me the reason why you are looking for a different way? Is it only for ease > of writing less statements or is there some other reason you foresee? I am > asking because we also have a number of similar construct in our vcl. > Thanks. > > Thanks, > Neel > > On Tue, 08 Mar 2011 12:31:11 +0530 Paul Lu wrote > > >Hi, > > > >I have to work with a lot of domain names in my varnish config and I was > wondering if there is an easier to way to match the hostname other than a > series of if statements. Is there anything like a hash? Or does anybody have > any C code to do this? > > > >example pseudo code: > >================================= > >vcl_recv(){ > > > > if(req.http.host == "www.domain1.com") > > { > > set req.backend = www_domain1_com; > > # more code > > return(lookup); > > } > > if(req.http.host == "www.domain2.com") > > { > > set req.backend = www_domain2_com; > > # more code > > return(lookup); > > } > > if(req.http.host == "www.domain3.com") > > { > > set req.backend = www_domain3_com; > > # more code > > return(lookup); > > } > >} > >================================= > > > >Thank you, > >Paul > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From perbu at varnish-software.com Wed Mar 9 10:13:18 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 9 Mar 2011 10:13:18 +0100 Subject: A lot of if statements to handle hostnames In-Reply-To: References: <1299567671.S.7147.H.WVBhdWwgTHUAQSBsb3Qgb2YgaWYgc3RhdGVtZW50cyB0byBoYW5kbGUgaG9zdG5hbWVz.57664.pro-237-175.old.1299569572.19135@webmail.rediffmail.com> Message-ID: On Tue, Mar 8, 2011 at 6:50 PM, Paul Lu wrote: > Primarily just to make the code cleaner and a little concerned if I have a > lot of hostnames. 100 for example. Having to potentially traverse several > if statements for each request seems inefficient to me. > Don't worry about it. I think we've clearly established that it isn't (in a parallel thread). -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtshilston at gmail.com Wed Mar 9 13:05:31 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Wed, 9 Mar 2011 12:05:31 +0000 Subject: Lots of configs In-Reply-To: <9583.1299658732@critter.freebsd.dk> References: <9583.1299658732@critter.freebsd.dk> Message-ID: <69A34F57-3B91-41B0-8DD8-49191E80E268@gmail.com> On 9 Mar 2011, at 08:18, Poul-Henning Kamp wrote: > In message , AD w > rites: > >> As the OP, i would like to get the discussion on this thread back to >> something useful. That being said... > > Arthur and I brainstormed this issue on our way to cake after VUG3 > and a couple of ideas came up which may be worth looking at. > > At the top-heavy end, is having VCL files tell which domains they > apply to, possibly something like: > > domains { > "*.example.com"; > ! "notthisone.example.com"; > "*.example.biz"; > } I was chatting to a Varnish administrator at a PHP conference in London a couple of weeks ago. They run Varnish for a very high profile site which has lots of sub-sites that have delegated web teams. So, for example, all traffic to www.example.com hits Varnish, and www.example.com/alpha is managed by a completely separate team to www.example.com/beta. Thanks to Varnish, each base path can be routed to different backends. However, the varnish behaviour itself is different for different paths. My understanding is that each team submits their VCL to the central administrator who sticks it together, and that each path/site has a separate set of vcl_* functions. Whilst I obviously don't know exactly how they're doing this, I think that this different level of behaviour splitting would be worth considering as part of these discussions. 
So, perhaps it might make sense to have individual VCL files that declare what they're interested in, such as: ==alpha.vcl== appliesto { "alpha"; "alpha.example.com"; "alpha.example.co.uk"; } sub vcl_recv { set req.backend = alphapool; } == and then in the main VCL, do something pseudo-code like: ==default.vcl== include "alpha.vcl" sub vcl_recv { if (req.http.host == "www.example.com") { /* Do some regex to find the first part of the path, and see if there's a valid config for it */ callconfig(reg_sub(req.url, "/(.*)(/.*)?", "\1")); } else { /* Try to see if there's a match for this hostname */ callconfig(req.http.host); } /* By this point, nothing has matched, so call some default behaviour */ callconfig("default"); } == So, callconfig effectively works a bit like the current 'return' statement, but only if a config that 'appliesto' the defined string is found in a config - once the config is called, no further code in the calling function is executed. By detaching this behaviour from the concept of a "domain" in PHK's example, then this pattern could be used for a wider range of scenarios - perhaps switching based on the requestor's IP / ACL matches or whatever else Varnish users might need. Rob From scaunter at topscms.com Wed Mar 9 16:11:05 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 9 Mar 2011 10:11:05 -0500 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <8371073562863633333@unknownmsgid> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net> I don't think pass or pipe is the issue. 503 means the backend can't answer, and calling pipe won't change that. Here's an example. Set up a "patient" back end; you can collect your back ends into a patient director. backend waitalongtime { .host = "a.b.c.d"; .port = "80"; .first_byte_timeout = 60s; .between_bytes_timeout = 10s; .probe = { .url = "/areyouthere/"; .timeout = 10s; .interval = 15s; .window = 5; .threshold = 1; } } Check the number of restarts before you select a back end. Try your normal, fast director first. if (req.restarts == 0) { set req.backend = fast; } else if (req.restarts == 1) { set req.backend = waitalongtime; } else if (req.restarts == 2) { set req.backend = waitalongtime; } else { set req.backend = waitalongtime; } If you get a 503, catch it in error, and increment restart. This will select the slow back end. sub vcl_error { if (obj.status == 503 && req.restarts < 4) { restart; } } Stefan Caunter Operations Torstar Digital m: (416) 561-4871 -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan Mullally Sent: March-08-11 5:32 PM To: Stewart Robinson Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish 503ing on ~1/100 POSTs On Tue, 8 Mar 2011, Stewart Robinson wrote: > Whilst this may not be a fix to a possible bug in varnish have you tried > switching posts to pipe instead of pass? This might well help, but I'd have no way of knowing for sure. The backend servers indicate the requests via varnish are processed correctly. I'm not able to reproduce the problem at will so I'd be relying on user feedback to determine if the problem still occurs and that's unreliable at best. It is of course better than having the problem occur, but I'd rather take the opportunity to try and get to the bottom of it while I can. I only deployed varnish a couple of days ago. The site will be fairly quiet until the end of the week. I'll resort to pipe if I've not got a fix by then. 
-Ronan _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From ronan at iol.ie Wed Mar 9 16:38:28 2011 From: ronan at iol.ie (Ronan Mullally) Date: Wed, 9 Mar 2011 15:38:28 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net> References: <8371073562863633333@unknownmsgid> <7F0AA702B8A85A4A967C4C8EBAD6902C01057E1B@TMG-EVS02.torstar.net> Message-ID: Hi Stephan, On Wed, 9 Mar 2011, Caunter, Stefan wrote: > I don't think pass or pipe is the issue. 503 means the backend can't > answer, and calling pipe won't change that. > Set up a "patient" back end; you can collect your back ends into a > patient director. Ah, the penny drops. I was thinking of "patient" in the context of health checks (ie a sick backend). I'll give it a go, but my gut feeling is that the backends aren't at fault. I'm seeing this error when they are both backends lightly loaded (load average around 1 on an 8 core box), and the rate of incidence does not appear to be related to the load - I actually saw a slightly lower rate (under 1%) last night when utilisation was higher, and as I said previously when I used pound instead of varnish this wasn't a problem. I'll try the patient backend and keep a close eye on the error rate vs utilisation over the next few days. Thanks for your help. -Ronan > -----Original Message----- > >From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronan > Mullally > Sent: March-08-11 5:32 PM > To: Stewart Robinson > Cc: varnish-misc at varnish-cache.org > Subject: Re: Varnish 503ing on ~1/100 POSTs > > On Tue, 8 Mar 2011, Stewart Robinson wrote: > > > Whilst this may not be a fix to a possible bug in varnish have you > tried > > switching posts to pipe instead of pass? > > This might well help, but I'd have no way of knowing for sure. The > backend servers indicate the requests via varnish are processed > correctly. > I'm not able to reproduce the problem at will so I'd be relying on user > feedback to determine if the problem still occurs and that's unreliable > at > best. > > It is of course better than having the problem occur, but I'd rather > take > the opportunity to try and get to the bottom of it while I can. I only > deployed varnish a couple of days ago. The site will be fairly quiet > until the end of the week. I'll resort to pipe if I've not got a fix by > then. > > > -Ronan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From nadahalli at gmail.com Wed Mar 9 23:46:58 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Wed, 9 Mar 2011 17:46:58 -0500 Subject: Duplicate Purges / Purge.Length size reduction Message-ID: Hello All. I have a few questions on the length of the purge.list. 1 - Is it something to be worried about? What's the optimal n_struct_object to n_active_purges ratio? 2 - If I have periodic purge adds that are adding the same URL pattern to be purged, does varnish do any internal optimization? 3 - Is it better to have a ban lurker in place to keep the purge.list length under check? -T -------------- next part -------------- An HTML attachment was scrubbed... 
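A minimal sketch of how entries end up on purge.list in the first place may help frame the questions above: in 2.1 a purge submitted from VCL (or from the CLI) appends an entry to that list. The acl name, its address and the response texts below are only illustrative, and the syntax is the 2.1 purge() form:

acl purgers {
    "127.0.0.1";   # assumption: purges are only accepted from localhost
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!(client.ip ~ purgers)) {
            error 405 "Not allowed.";
        }
        # each call appends one entry to the list shown by purge.list
        purge("req.url ~ " req.url " && req.http.host == " req.http.host);
        error 200 "Purge added.";
    }
}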
URL: From tfheen at varnish-software.com Thu Mar 10 08:08:07 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Thu, 10 Mar 2011 08:08:07 +0100 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: (Ronan Mullally's message of "Tue, 8 Mar 2011 20:38:08 +0000 (GMT)") References: Message-ID: <871v2fwizs.fsf@qurzaw.varnish-software.com> ]] Ronan Mullally | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. | The current load is light (up to a couple of thousand active sessions, | peak is around five thousand). Varnish has a fairly simple config with | a director consisting of two Apache backends: This looks a bit odd: | backend backend1 { | .host = "1.2.3.4"; | .port = "80"; | .connect_timeout = 5s; | .first_byte_timeout = 90s; | .between_bytes_timeout = 90s; | A typical request is below. The first attempt fails with: | | 33 FetchError c http first read error: -1 0 (Success) This just means the backend closed the connection on us. | there is presumably a restart and the second attempt (sometimes to | backend1, sometimes backend2) fails with: | | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) This is a timeout, however: | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 That 2.89s backend response time doesn't add up with your timeouts. Can you see if you can get a tcpdump of what's going on? Regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From ronan at iol.ie Thu Mar 10 14:29:23 2011 From: ronan at iol.ie (Ronan Mullally) Date: Thu, 10 Mar 2011 13:29:23 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <871v2fwizs.fsf@qurzaw.varnish-software.com> References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Hej Tollef, On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > | 33 FetchError c http first read error: -1 0 (Success) > > This just means the backend closed the connection on us. > > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > This is a timeout, however: > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > That 2.89s backend response time doesn't add up with your timeouts. Can > you see if you can get a tcpdump of what's going on? I'll see what I can do. Varnish is serving an average of about 20 objects per second so there'll be a lot of data to gather / sift through. The following numbers might prove useful - they're counts of the number of successful GETs, POSTs and 503s since 17:00 yesterday. 
GET POST Hour 200 503 200 503 ------------------------------------------ 17:00 72885 0 (0.00%) 841 0 (0.00%) 18:00 69266 0 (0.00%) 858 6 (0.70%) 19:00 65030 0 (0.00%) 866 3 (0.35%) 20:00 70289 0 (0.00%) 975 8 (0.82%) 21:00 105767 0 (0.00%) 1214 5 (0.41%) 22:00 86236 0 (0.00%) 834 3 (0.36%) 23:00 67078 0 (0.00%) 893 2 (0.22%) 00:00 48042 0 (0.00%) 669 4 (0.60%) 01:00 35966 0 (0.00%) 479 0 (0.00%) 02:00 29598 0 (0.00%) 395 3 (0.76%) 03:00 25819 0 (0.00%) 359 0 (0.00%) 04:00 22835 0 (0.00%) 366 4 (1.09%) 05:00 24487 0 (0.00%) 315 1 (0.32%) 06:00 26583 0 (0.00%) 353 4 (1.13%) 07:00 30433 0 (0.00%) 398 2 (0.50%) 08:00 37394 0 (0.00%) 363 9 (2.48%) 09:00 44462 1 (0.00%) 526 4 (0.76%) 10:00 49891 2 (0.00%) 611 4 (0.65%) 11:00 54826 1 (0.00%) 599 7 (1.17%) 12:00 60765 6 (0.01%) 615 1 (0.16%) 13:00 18941 0 (0.00%) 190 0 (0.00%) Apart from a handful of 503s to GET requests this morning (which I've not had a chance to investigate) the problem almost exclusively affects POSTs. The frequency of the problem does not appear to be related to the load - the highest incidence does not match the busiest periods. I'll get back to you when I have a few packet traces. It will most likely be next week. FWIW, I forgot to mention in my previous posts, I'm running 2.1.5 on a Debian Lenny VM. -Ronan From allan_wind at lifeintegrity.com Thu Mar 10 16:29:18 2011 From: allan_wind at lifeintegrity.com (Allan Wind) Date: Thu, 10 Mar 2011 15:29:18 +0000 Subject: SSL Message-ID: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Is the current thinking still that SSL support will not be integrated into varnish? I found the post in the archives from last year that speaks of nginx as front-end. Has anyone looked into the other stunnel or pound and could share their experience? I cannot tell from their web site if haproxy added SSL support yet. Here is what the pound web site[1] says about stunnel: stunnel: probably comes closest to my understanding of software design (does one job only and does it very well). However, it lacks the load balancing and HTTP filtering features that I considered necessary. Using stunnel in front of Pound (for HTTPS) would have made sense, except that integrating HTTPS into Pound proved to be so simple that it was not worth the trouble. [1] http://www.apsis.ch/pound/ /Allan -- Allan Wind Life Integrity, LLC From phk at phk.freebsd.dk Thu Mar 10 16:41:00 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 10 Mar 2011 15:41:00 +0000 Subject: SSL In-Reply-To: Your message of "Thu, 10 Mar 2011 15:29:18 GMT." <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <82042.1299771660@critter.freebsd.dk> In message <20110310152918.GJ1675 at vent.lifeintegrity.localnet>, Allan Wind writ es: >Is the current thinking still that SSL support will not be >integrated into varnish? Yes, that is current thinking. I can see no advantages that outweigh the disadvantages, and a realistic implementation would not be significantly different from running a separate process to do the job in the first place. http://www.varnish-cache.org/docs/trunk/phk/ssl.html -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
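When a separate SSL proxy (stunnel, pound, nginx or similar) terminates SSL in front of Varnish, the piece that usually needs attention in VCL is the client address. A minimal 2.1-style sketch, assuming (purely for illustration) that the terminator runs on 127.0.0.1, appends X-Forwarded-For itself, and optionally marks secure requests with an X-Forwarded-Proto header:

acl ssl_terminator {
    "127.0.0.1";   # assumption: the SSL proxy runs on the same host
}

sub vcl_recv {
    if (!(client.ip ~ ssl_terminator)) {
        # plain-HTTP clients reach Varnish directly; record their address
        set req.http.X-Forwarded-For = client.ip;
    }
    # requests arriving from the terminator keep the X-Forwarded-For header
    # (and X-Forwarded-Proto, if the proxy is configured to set one)
}

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    if (req.http.X-Forwarded-Proto) {
        # keep HTTP and HTTPS variants of the same URL apart
        set req.hash += req.http.X-Forwarded-Proto;
    }
    return (hash);
}

Whether the proto header is needed at all depends on whether the backends generate different content for HTTP and HTTPS; if they do not, the vcl_hash part can be dropped.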
From sime at sime.net.au Fri Mar 11 08:23:07 2011 From: sime at sime.net.au (Simon Males) Date: Fri, 11 Mar 2011 18:23:07 +1100 Subject: SSL In-Reply-To: <20110310152918.GJ1675@vent.lifeintegrity.localnet> References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: On Fri, Mar 11, 2011 at 2:29 AM, Allan Wind wrote: > Is the current thinking still that SSL support will not be > integrated into varnish? ?I found the post in the archives from > last year that speaks of nginx as front-end. ?Has anyone looked > into the other stunnel or pound and could share their experience? > I cannot tell from their web site if haproxy added SSL support > yet. Using pound 2.4.3 (a little dated) over here. Works well. I've found pound will throw errors in /var/log a few seconds after a Chrome connection (Connection timed out). Though this isn't reflected on the client side. Hate to crap on pound's parade, but I've also some client side errors, but they are not reproducible on demand. http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 -- Simon Males From michal.taborsky at nrholding.com Fri Mar 11 09:31:00 2011 From: michal.taborsky at nrholding.com (Michal Taborsky) Date: Fri, 11 Mar 2011 09:31:00 +0100 Subject: SSL In-Reply-To: References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <4D79DDC4.6010606@nrholding.com> Dne 11.3.2011 8:23, Simon Males napsal(a): > I've found pound will throw errors in /var/log a few seconds after a > Chrome connection (Connection timed out). Though this isn't reflected > on the client side. > > Hate to crap on pound's parade, but I've also some client side errors, > but they are not reproducible on demand. > > http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 As far as I know, Chrome uses pre-connect to improve performance. What it does, is it creates immediately more than one TCP/IP connection to the target IP address, because most pages contain images and styles and javascript, and Chrome knows, that it will be downloading these in parallel. So it saves time on handshaking, when the connections are needed later. It will also keep the connections open for quite a long time and maybe pound times out these connections when nothing is happening. This sort of thing can happen with any browser, but I think Chrome is a lot more aggressive than others, so it stands out. -- Michal T?borsk? chief systems architect Netretail Holding, B.V. http://www.nrholding.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.boer at bizztravel.nl Fri Mar 11 08:53:05 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Fri, 11 Mar 2011 08:53:05 +0100 Subject: SSL In-Reply-To: References: <20110310152918.GJ1675@vent.lifeintegrity.localnet> Message-ID: <4D79D4E1.4060400@bizztravel.nl> We use Pound as well. It works fine. Regards, Martin On 03/11/2011 08:23 AM, Simon Males wrote: > On Fri, Mar 11, 2011 at 2:29 AM, Allan Wind > wrote: >> Is the current thinking still that SSL support will not be >> integrated into varnish? I found the post in the archives from >> last year that speaks of nginx as front-end. Has anyone looked >> into the other stunnel or pound and could share their experience? >> I cannot tell from their web site if haproxy added SSL support >> yet. > Using pound 2.4.3 (a little dated) over here. Works well. > > I've found pound will throw errors in /var/log a few seconds after a > Chrome connection (Connection timed out). 
Though this isn't reflected > on the client side. > > Hate to crap on pound's parade, but I've also some client side errors, > but they are not reproducible on demand. > > http://www.apsis.ch/pound/pound_list/archive/2010/2010-12/1291594925000 > From lampe at hauke-lampe.de Sun Mar 13 00:31:41 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Sun, 13 Mar 2011 00:31:41 +0100 Subject: caching of restarted requests possible? In-Reply-To: <4D6DEE1A.80609@gadu-gadu.pl> References: <1299021227.1879.9.camel@narf900.mobile-vpn.frell.eu.org> <4D6DEE1A.80609@gadu-gadu.pl> Message-ID: <4D7C025D.7060603@hauke-lampe.de> On 02.03.2011 08:13, ?ukasz Barszcz / Gadu-Gadu wrote: > Check out patch attached to ticket > http://varnish-cache.org/trac/ticket/412 which changes behavior to what > you need. Thanks again, it works! I adapted the patch for varnish 2.1.5: http://cfg.openchaos.org/varnish/patches/varnish-2.1.5-cache_restart.diff A working example can be seen here: http://cfg.openchaos.org/varnish/vcl/special/backend_select_updates.vcl Hauke. From checker at d6.com Sun Mar 13 05:28:00 2011 From: checker at d6.com (Chris Hecker) Date: Sat, 12 Mar 2011 20:28:00 -0800 Subject: best way to not cache large files? Message-ID: <4D7C47D0.9050809@d6.com> I have a 400mb file that I just want apache to serve. What's the best way to do this? I can put it in a directory and tell varnish not to cache stuff that matches that dir, but I'd rather just make a general rule that varnish should ignore >=20mb files or whatever. Thanks, Chris From straightflush at gmail.com Sun Mar 13 15:26:54 2011 From: straightflush at gmail.com (AD) Date: Sun, 13 Mar 2011 10:26:54 -0400 Subject: best way to not cache large files? In-Reply-To: <4D7C47D0.9050809@d6.com> References: <4D7C47D0.9050809@d6.com> Message-ID: i dont think you can check the body size (at least it seems that way with the existing req.* objects ). If you know the mime-type of the file you might just be able to pipe the mime type if that works for all file sizes ? I wonder if there is a way to pass the req object into some inline C that can access the body somehow? On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker wrote: > > I have a 400mb file that I just want apache to serve. What's the best way > to do this? I can put it in a directory and tell varnish not to cache stuff > that matches that dir, but I'd rather just make a general rule that varnish > should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kacperw at gmail.com Sun Mar 13 17:30:44 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Sun, 13 Mar 2011 17:30:44 +0100 Subject: VCL BNF Message-ID: Varnish Control Language grammar in BNF notation ================================================ The VCL compiler is a one-step lex-parse-prune-symtable-typecheck-emit compiler. Having looked for it several times myself, and having discussed it with several others the conclusion was that VCL needs a proper grammar. Grammars, as many know, are useful in several circumstances. BNF based on PHK's precedence rules http://www.varnish-cache.org/docs/trunk/phk/vcl_expr.html as well as vcc_Lexer and vcc_Parse from HEAD. 
For those of us unfamiliar with BNF: http://www.cui.unige.ch/db-research/Enseignement/analyseinfo/AboutBNF.html Note on BNF syntax: As the BNF canon is somewhat unweildy, I've opted for the convention of specifying terminal tokens in lowercase, while non-terminals are denoted in UPPERCASE. Optional statements are the usual [..] and repeated statements are {..}. To improve portability there are quotes around literals as this does not sacrifice readability. As for token and production names, I've tried to stay as true to the source code as possible without sacrificing readability. As an extension to BNF I have included comments, which are lines starting with '#'. I have attempted to comment grammar particular to major versions of Varnish and other notables. I have not backward-checked the grammar, and would appreciate comments on what grammar differences we see in V2.0 and 2.1 as compared to 3.0. There are bound to be bugs. Feedback and comments appreciated. v0.1 .. not yet machine parsable(?)! Nonterminals ------------ VCL ::= ACL | SUB | BACKEND | DIRECTOR | PROBE | IMPORT | CSRC ACL ::= 'acl' identifier '{' {ACLENTRY} '}' SUB ::= 'sub' identifier COMPOUND BACKEND ::= 'backend' identifier '{' { ['set|backend'] BACKENDSPEC } '}' PROBE ::= 'probe' identifier '{' PROBESPEC '}' # VMod imports are new in 3.0 IMPORT ::= 'import' identifier [ 'from' string ] ';' CSRC ::= 'C{' inline-c '}C' # director definitions - simple variant DIRECTOR ::= 'director' dirtype identifier '{' DIRSPEC '}' dirtype ::= 'hash' | 'random' | 'client' | 'round-robin' | 'dns' # can do better: specify production rule for every director type DIRECTOR ::= 'director' ('hash'|'random'|'client')' identifier '{' DIRSPEC '}' 'director' 'round-robin' identifier '{' { '.' BACKENDEF } '}' 'director' 'dns' identifier '{' DNSSPEC '}' DIRSPEC ::= [ '.' 'retries' '=' uintval ';' ] { '{' '.' BACKENDEF [ '.' 'weight' '=' numval ';' ] '}' } DNSSPEC ::= { '.' BACKENDEF } [ '.' 'ttl' '=' timeval ';' ] [ '.' 'suffix' '=' string ';' ] [ '.' DNSLIST ] DNSLIST ::= '{' { iprange ';' [ BACKENDSPEC ] } '}' BACKENDEF ::= 'backend' ( BACKENDSPEC | identifier ';' ) # field spec as used in backend and probe definitions SPEC ::= '{' { '.' identifier = fieldval ';' } '}' # can do better: devil is in the detail on this one BACKENDSPEC ::= '.' 'host' '=' string ';' | '.' 'port' '=' string ';' # wow I had no idea... | '.' 'host_header' '=' string ';' | '.' 'connect_timeout''=' timeval ';' | '.' 'first_byte_timeout' '=' timeval ';' | '.' 'between_bytes_timeout' '=' timeval ';' | '.' 'max_connections '=' uintval ';' | '.' 'saintmode_treshold '=' uintval ';' | '.' 'probe' '{' {PROBESPEC} '}' ';' # another woww \0/ | '.' 'probe' identifier; PROBESPEC ::= '.' 'url' = string ';' | '.' 'request' = string ';' | '.' 'expected_response' = uintval ';' | '.' 'timeout' = timeval ';' | '.' 'interval' = timeval ';' | '.' 'window' = uintval ';' | '.' 'treshold' = uintval ';' | '.' 'initial' = uintval ';' # there is no room in BNF for 'either !(..) or (!..) or !..' 
(parens optional) ACLENTRY ::= ['!'] ['('] ['!'] iprange [')'] ';' # totally avoids dangling else yarr IFSTMT ::= 'if' CONDITIONAL COMPOUND [ { ('elsif'|'elseif') CONDITIONAL COMPOUND } [ 'else' COMPOUND ]] CONDITIONAL ::= '(' EXPR ')' COMPOUND ::= '{' {STMT} '}' STMT ::= COMPOUND | IFSTMT | CSRC | ACTIONSTMT ';' ACTIONSTMT ::= ACTION | FUNCALL ACTION :== 'error' [ '(' EXPR(int) [ ',' EXPR(string) ] ')' | EXPR(int) [ EXPR(string) ] | 'call' identifier # in vcl_fetch only | 'esi' # in vcl_hash only | 'hash_data' '(' EXPRESSION ')' | 'panic' EXPRESSION # note: purge expressions are semantically special | 'purge' '(' EXPRESSION ')' | 'purge_url' '(' EXPRESSION ')' | 'remove' variable # V2.0: could do actions without return keyword | 'return' '(' ( deliver | error | fetch | hash | lookup | pass | pipe | restart ) ')' # rollback what? | 'rollback' | 'set' variable assoper EXPRESSION | 'synthetic' EXPRESSION | 'unset' variable FUNCALL ::= variable '(' [ { FUNCALL | expr | string-list } ] ')' EXPRESSION ::= 'true' | 'false' | constant | FUNCALL | variable | '(' EXPRESSION ')' | number '*' number | number '/' number # add two strings without operator in 2.x series | duration '*' doubleval | string '+' string | number '+' number | number '-' number | timeval '+' duration | timeval '-' duration | timeval '-' timeval | duration '+' duration | duration '-' duration | EXPRESSION comparison EXPRESSION | '!' EXPRESSION | EXPRESSION '&&' EXPRESSION | EXPRESSION '||' EXPRESSION Terminals: ----------------- timeval ::= doubleval timeunit duration ::= ['-'] timeval doubleval ::= { number [ '.' [number] ] } timeunit ::= 'ms' | 's' | 'm' | 'h' | 'd' | 'w' uintval ::= { number } # unsigned fieldval ::= timeval | doubleval | timeunit | uintval constant ::= string | fieldval iprange ::= string [ '/' number ] variable ::= identifier [ '.' identifier ] comparison ::= '==' | '!=' | '<' | '>' | '<= | '>=' | '~' | '!~' assoper ::= '=' | '+=' | '-=' | '*=' | '/=' | comment ::= /* !(/*|*/)* */ // !(\n)* $ # !(\n)* $ long-string ::= '{"' !("})* '"}' shortstring ::= '"' !(\")* '"' inline-c ::= !(('}C') string ::= shortstring | longstring identifier ::= [a-zA-Z][a-zA-Z0-9_-]* number ::= [0-9]+ Lexer tokens: ----------------- ! % & + * , - . / ; < = > { | } ~ ( ) != NEQ !~ NOMATCH ++ INC += INCR *= MUL -- DEC -= DECR /= DIV << SHL <= LEQ == EQ >= GEQ >> SHR || COR && CAND elseif ELSEIF elsif ELSIF include INCLUDE if IF # include statements omitted as they are pre-processed away, they are not a syntactic device. -- http://kacper.doesntexist.org http://windows.dontexist.com Employ no technique to gain supreme enlightment. - Mar pa Chos kyi blos gros From phk at phk.freebsd.dk Sun Mar 13 17:39:32 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 13 Mar 2011 16:39:32 +0000 Subject: VCL BNF In-Reply-To: Your message of "Sun, 13 Mar 2011 17:30:44 +0100." Message-ID: <10462.1300034372@critter.freebsd.dk> In message , Kacp er Wysocki writes: >Varnish Control Language grammar in BNF notation >================================================ Not bad! Put it in a wiki page. If you don't have wiki bit, contact me with your trac login, and I'll give you one. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From perbu at varnish-software.com Sun Mar 13 17:49:24 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 13 Mar 2011 17:49:24 +0100 Subject: VCL BNF In-Reply-To: <10462.1300034372@critter.freebsd.dk> References: <10462.1300034372@critter.freebsd.dk> Message-ID: On Sun, Mar 13, 2011 at 5:39 PM, Poul-Henning Kamp wrote: > In message , > Kacp > er Wysocki writes: > > > >Varnish Control Language grammar in BNF notation > >================================================ > > Not bad! > > Put it in a wiki page. If you don't have wiki bit, contact me with > your trac login, and I'll give you one. > Shouldn't we rather keep it in the reference docs? -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at darkmere.gen.nz Sun Mar 13 22:22:03 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 10:22:03 +1300 (NZDT) Subject: Always sending gzip? Message-ID: Getting a weird thing where the server is returning gzip'd content even when we don't ask for it. Running 2.1.5 from the rpm packages Our config has: # If Accept-Encoding contains "gzip" then make it only include that. If not # then remove header completely. deflate just causes problems # if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|mp4|flv|pdf)$") { # No point in compressing these remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else { # unkown algorithm remove req.http.Accept-Encoding; } } But: $ curl -v -H "Accept-Encoding: fff" -H "Host: www.xxxx.com" http://yyy/themes/0/scripts/getTime.cfm > /dev/null > GET /themes/0/scripts/getTime.cfm HTTP/1.1 > User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > Accept: */* > Accept-Encoding: fff > Host: www.xxxx.com > < HTTP/1.1 200 OK < Server: Apache < Cache-Control: max-age=300 < X-UA-Compatible: IE=EmulateIE7 < Content-Type: text/javascript < Proxy-Connection: Keep-Alive < Content-Encoding: gzip < Content-Length: 176 < Date: Sun, 13 Mar 2011 21:13:24 GMT < Connection: keep-alive < Cache-Info: Object-Age=228, hits=504, Cache-Host=yyy, Backend-Host=apn121, healthy=yes 34 SessionOpen c 1.2.3.4 21147 :80 34 ReqStart c 1.2.3.4 21147 248469172 34 RxRequest c GET 34 RxURL c /themes/0/scripts/getTime.cfm 34 RxProtocol c HTTP/1.1 34 RxHeader c User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 34 RxHeader c Accept: */* 34 RxHeader c Accept-Encoding: fff 34 RxHeader c Host: www.xxx.com 34 VCL_call c recv lookup 34 VCL_call c hash hash 34 Hit c 248452316 34 VCL_call c hit deliver 34 VCL_call c deliver deliver 34 TxProtocol c HTTP/1.1 34 TxStatus c 200 34 TxResponse c OK 34 TxHeader c Server: Apache 34 TxHeader c Cache-Control: max-age=300 34 TxHeader c X-UA-Compatible: IE=EmulateIE7 34 TxHeader c Content-Type: text/javascript 34 TxHeader c Proxy-Connection: Keep-Alive 34 TxHeader c Content-Encoding: gzip 34 TxHeader c Content-Length: 176 34 TxHeader c Accept-Ranges: bytes 34 TxHeader c Date: Sun, 13 Mar 2011 21:11:36 GMT 34 TxHeader c Connection: keep-alive 34 TxHeader c Cache-Info: Object-Age=120, hits=243, Cache-Host=yyy, Backend-Host=apn121, healthy=yes 34 Length c 176 34 ReqEnd c 248469172 1300050696.585048914 
1300050696.585428953 0.000026941 0.000339031 0.000041008 -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From perbu at varnish-software.com Sun Mar 13 22:37:59 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 13 Mar 2011 22:37:59 +0100 Subject: Always sending gzip? In-Reply-To: References: Message-ID: On Sun, Mar 13, 2011 at 10:22 PM, Simon Lyall wrote: > > Getting a weird thing where the server is returning gzip'd content even > when we don't ask for it. > If your server is not sending "Vary: Accept-Encoding" Varnish won't know that it needs to Vary on the A-E header. Just add it and Varnish will do the right thing. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Dodd at comicrelief.com Sun Mar 13 22:51:24 2011 From: B.Dodd at comicrelief.com (Ben Dodd) Date: Sun, 13 Mar 2011 21:51:24 +0000 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Did anyone manage to find a workable solution for this? On 10 Mar 2011, at 13:29, Ronan Mullally wrote: > Hej Tollef, > > On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > >> | 33 FetchError c http first read error: -1 0 (Success) >> >> This just means the backend closed the connection on us. >> >> | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) >> >> This is a timeout, however: >> >> | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 >> >> That 2.89s backend response time doesn't add up with your timeouts. Can >> you see if you can get a tcpdump of what's going on? > > I'll see what I can do. Varnish is serving an average of about 20 objects > per second so there'll be a lot of data to gather / sift through. > > The following numbers might prove useful - they're counts of the number of > successful GETs, POSTs and 503s since 17:00 yesterday. > > GET POST > Hour 200 503 200 503 > ------------------------------------------ > 17:00 72885 0 (0.00%) 841 0 (0.00%) > 18:00 69266 0 (0.00%) 858 6 (0.70%) > 19:00 65030 0 (0.00%) 866 3 (0.35%) > 20:00 70289 0 (0.00%) 975 8 (0.82%) > 21:00 105767 0 (0.00%) 1214 5 (0.41%) > 22:00 86236 0 (0.00%) 834 3 (0.36%) > 23:00 67078 0 (0.00%) 893 2 (0.22%) > 00:00 48042 0 (0.00%) 669 4 (0.60%) > 01:00 35966 0 (0.00%) 479 0 (0.00%) > 02:00 29598 0 (0.00%) 395 3 (0.76%) > 03:00 25819 0 (0.00%) 359 0 (0.00%) > 04:00 22835 0 (0.00%) 366 4 (1.09%) > 05:00 24487 0 (0.00%) 315 1 (0.32%) > 06:00 26583 0 (0.00%) 353 4 (1.13%) > 07:00 30433 0 (0.00%) 398 2 (0.50%) > 08:00 37394 0 (0.00%) 363 9 (2.48%) > 09:00 44462 1 (0.00%) 526 4 (0.76%) > 10:00 49891 2 (0.00%) 611 4 (0.65%) > 11:00 54826 1 (0.00%) 599 7 (1.17%) > 12:00 60765 6 (0.01%) 615 1 (0.16%) > 13:00 18941 0 (0.00%) 190 0 (0.00%) > > Apart from a handful of 503s to GET requests this morning (which I've not > had a chance to investigate) the problem almost exclusively affects POSTs. > The frequency of the problem does not appear to be related to the load - > the highest incidence does not match the busiest periods. > > I'll get back to you when I have a few packet traces. It will most likely > be next week. FWIW, I forgot to mention in my previous posts, I'm running > 2.1.5 on a Debian Lenny VM. 
> > > -Ronan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > ______________________________________________________________________ > This email has been scanned by the MessageLabs Email Security System. > For more information please visit http://www.messagelabs.com/email > ______________________________________________________________________ Comic Relief 1st Floor 89 Albert Embankment London SE1 7TP Tel: 020 7820 2000 Fax: 020 7820 2222 red at comicrelief.com www.comicrelief.com Comic Relief is the operating name of Charity Projects, a company limited by guarantee and registered in England no. 1806414; registered charity 326568 (England & Wales) and SC039730 (Scotland). Comic Relief Ltd is a subsidiary of Charity Projects and registered in England no. 1967154. Registered offices: Hanover House, 14 Hanover Square, London W1S 1HP. VAT no. 773865187. This email (and any attachment) may contain confidential and/or privileged information. If you are not the intended addressee, you must not use, disclose, copy or rely on anything in this email and should contact the sender and delete it immediately. The views of the author are not necessarily those of Comic Relief. We cannot guarantee that this email (and any attachment) is virus free or has not been intercepted and amended, so do not accept liability for any damage resulting from software viruses. You should carry out your own virus checks. From simon at darkmere.gen.nz Mon Mar 14 00:12:50 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 12:12:50 +1300 (NZDT) Subject: Always sending gzip? In-Reply-To: References: Message-ID: On Sun, 13 Mar 2011, Per Buer wrote: > On Sun, Mar 13, 2011 at 10:22 PM, Simon Lyall wrote: > > Getting a weird thing where the server is returning gzip'd content even when we don't ask for it. > > > If your server is not sending "Vary: Accept-Encoding" Varnish won't know that it needs to Vary on the A-E header. Just add it > and Varnish will do the right thing. Of course, it was turning up sometimes but not always. I've changed the backend to force it in and that seems to have fixed the problem (and hopefully another we are seeing). Thankyou -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From simon at darkmere.gen.nz Mon Mar 14 03:36:39 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Mon, 14 Mar 2011 15:36:39 +1300 (NZDT) Subject: Refetch new page according to result? Message-ID: This looks impossible but I thought I'd ask. The idea I had was that the cache could fetch a page and according to the result fetch another page an serve that to the user. So I could look for a 301 and if the 301 pointed to my domain I could refetch the new URL and deliver that content (without giving the user a 301). However going through the docs this appears to be impossible since I won't know the result of the backend call until vcl_fetch or vcl_deliver and neither of these give me the option to go back to vcl_recv This is for archived pages, so the app would check the archive status early in the transaction and just return a quick pointer to the archive url (which might be just flat file on disk) which varnish could grab, serve and cache forever with the user not being redirected. 
-- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From phk at phk.freebsd.dk Mon Mar 14 08:19:38 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 07:19:38 +0000 Subject: VCL BNF In-Reply-To: Your message of "Sun, 13 Mar 2011 17:49:24 +0100." Message-ID: <22707.1300087178@critter.freebsd.dk> In message , Per Buer writes: >> >> >Varnish Control Language grammar in BNF notation >> >================================================ >> >> Not bad! >> >> Put it in a wiki page. If you don't have wiki bit, contact me with >> your trac login, and I'll give you one. >> > >Shouldn't we rather keep it in the reference docs? Works for me too -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From tfheen at varnish-software.com Mon Mar 14 08:29:56 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 14 Mar 2011 08:29:56 +0100 Subject: Refetch new page according to result? In-Reply-To: (Simon Lyall's message of "Mon, 14 Mar 2011 15:36:39 +1300 (NZDT)") References: Message-ID: <87oc5erwgb.fsf@qurzaw.varnish-software.com> ]] Simon Lyall | However going through the docs this appears to be impossible since I | won't know the result of the backend call until vcl_fetch or | vcl_deliver and neither of these give me the option to go back to | vcl_recv You should be able to just change req.url and restart in vcl_fetch. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From schmidt at ze.tum.de Mon Mar 14 08:45:06 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 08:45:06 +0100 Subject: SSL In-Reply-To: <82042.1299771660@critter.freebsd.dk> References: <82042.1299771660@critter.freebsd.dk> Message-ID: <4D7DC782.6050300@ze.tum.de> Am 10.03.2011 16:41, schrieb Poul-Henning Kamp: > In message <20110310152918.GJ1675 at vent.lifeintegrity.localnet>, Allan Wind writ > es: >> Is the current thinking still that SSL support will not be >> integrated into varnish? > > Yes, that is current thinking. I can see no advantages that outweigh > the disadvantages, and a realistic implementation would not be > significantly different from running a separate process to do the > job in the first place. stunnel has the disatwantage that we loose the clientIP information. Intigration of SSL in varnish wouldn't have this problem. with pound thios can be fixen by analysing the forewarded-for header but isn't that elegant. Regards Estartu -- ---------------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de Technische Universit?t M?nchen | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: +49 89 289-25270 | PGP-PublicKey Fax: +49 89 289-25257 | on request -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From phk at phk.freebsd.dk Mon Mar 14 08:55:40 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 07:55:40 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 08:45:06 +0100." <4D7DC782.6050300@ze.tum.de> Message-ID: <41707.1300089340@critter.freebsd.dk> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >stunnel has the disatwantage that we loose the clientIP information. 
Doesn't it set a header with this information ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From perbu at varnish-software.com Mon Mar 14 09:06:28 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 09:06:28 +0100 Subject: SSL In-Reply-To: <41707.1300089340@critter.freebsd.dk> References: <4D7DC782.6050300@ze.tum.de> <41707.1300089340@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 8:55 AM, Poul-Henning Kamp wrote: > In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: > > >stunnel has the disatwantage that we loose the clientIP information. > > Doesn't it set a header with this information ? > Yes. If we use the patched stunnel version that haproxy also uses. It requires Varnish to understand the protocol however, as the address of the client is sent at the beginning of the conversation in binary form. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Mon Mar 14 09:14:51 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 08:14:51 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 09:06:28 +0100." Message-ID: <41829.1300090491@critter.freebsd.dk> In message , Per Buer writes: >Yes. If we use the patched stunnel version that haproxy also uses. It >requires Varnish to understand the protocol however, as the address of the >client is sent at the beginning of the conversation in binary form. I would say "Use a more intelligent SSL proxy" then... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From rtshilston at gmail.com Mon Mar 14 09:22:02 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Mon, 14 Mar 2011 08:22:02 +0000 Subject: SSL In-Reply-To: <41829.1300090491@critter.freebsd.dk> References: <41829.1300090491@critter.freebsd.dk> Message-ID: <4A7E853B-F74C-415F-B324-6FEBCDA0D7E5@gmail.com> On 14 Mar 2011, at 08:14, Poul-Henning Kamp wrote: > In message , Per > Buer writes: > >> Yes. If we use the patched stunnel version that haproxy also uses. It >> requires Varnish to understand the protocol however, as the address of the >> client is sent at the beginning of the conversation in binary form. > > I would say "Use a more intelligent SSL proxy" then... We're using Varnish successfully with nginx. 
The config looks like: ===== worker_processes 1; error_log /var/log/nginx/global-error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { ssl on; ssl_certificate /etc/ssl/example.com.crt; ssl_certificate_key /etc/ssl/example.com.key; listen a.b.c.4 default ssl; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; # Proxy any requests to the local varnish instance location / { proxy_set_header "Host" $host; proxy_set_header "X-Forwarded-By" "Nginx-a.b.c.4"; proxy_set_header "X-Forwarded-For" $proxy_add_x_forwarded_for; proxy_pass a.b.c.5; } } } ==== From schmidt at ze.tum.de Mon Mar 14 09:34:41 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 09:34:41 +0100 Subject: SSL In-Reply-To: <41707.1300089340@critter.freebsd.dk> References: <41707.1300089340@critter.freebsd.dk> Message-ID: <4D7DD321.7000906@ze.tum.de> Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: > In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: > >> stunnel has the disatwantage that we loose the clientIP information. > > Doesn't it set a header with this information ? It's a tunnel. It doesn't change the stream. As I said, we use pound because it sets the header. But its another daemon to run and to setup. Another component that could fail. Integrating SSL in varnish would reduce the complexity. Regards Estartu -- ------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de TU-M?nchen | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: 089/289-25270 | Fax: 089/289-25257 | PGP-Publickey auf Anfrage -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From kacperw at gmail.com Mon Mar 14 11:16:44 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 11:16:44 +0100 Subject: VCL BNF In-Reply-To: <22707.1300087178@critter.freebsd.dk> References: <22707.1300087178@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 8:19 AM, Poul-Henning Kamp wrote: > In message , Per > Buer writes: >>> >Varnish Control Language grammar in BNF notation >>> >>> Not bad! >>> >>> Put it in a wiki page. ?If you don't have wiki bit, contact me with >>> your trac login, and I'll give you one. >>> >> >>Shouldn't we rather keep it in the reference docs? > > Works for me too The BNF might not be 100% complete yet - there might be bugs - so wiki is appropriate. kwy is my trac login. 0K From phk at phk.freebsd.dk Mon Mar 14 11:23:08 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 10:23:08 +0000 Subject: VCL BNF In-Reply-To: Your message of "Mon, 14 Mar 2011 11:16:44 +0100." Message-ID: <69458.1300098188@critter.freebsd.dk> In message , Kacp er Wysocki writes: >The BNF might not be 100% complete yet - there might be bugs - so wiki >is appropriate. kwy is my trac login. Agreed. You should have wiki bit now. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
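On the client-IP point in this thread: with nginx or pound terminating SSL in front of Varnish (as in the configuration above), client.ip inside VCL is the terminator's address and the real client address only survives in the X-Forwarded-For header the proxy sets. A minimal sketch of handling that header in VCL; the 127.0.0.1 entry is an assumption for a terminator running on the same host, not something taken from the thread:

    acl ssl_terminators {
        "127.0.0.1";   # assumed address of the local nginx/pound instance
    }

    sub vcl_recv {
        # Only trust X-Forwarded-For on connections that really come from
        # the SSL terminator; on anything else, overwrite it with the
        # actual client address so it cannot be spoofed.
        if (!client.ip ~ ssl_terminators) {
            set req.http.X-Forwarded-For = client.ip;
        }
    }

Backends and log analysis then use X-Forwarded-For rather than the TCP peer address, which will always be the terminator for HTTPS traffic.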
From kacperw at gmail.com Mon Mar 14 12:05:27 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 12:05:27 +0100 Subject: SSL In-Reply-To: <4D7DD321.7000906@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 9:34 AM, Gerhard Schmidt wrote: > Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: >> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >> >>> stunnel has the disatwantage that we loose the clientIP information. >> >> Doesn't it set a header with this information ? > > It's a tunnel. It doesn't change the stream. As I said, we use pound because > it sets the header. But its another daemon to run and to setup. Another > component that could fail. Integrating SSL in varnish would reduce the > complexity. What you meant to say is "integrating SSL in Varnish would increase complexity". Putting that component inside varnish doesn't automatically make it infallable. As an added bonus, if SSL is in a separate process it won't bring the whole server down if it fails, if that's the kind of stuff you're worried about. 0K -- http://kacper.doesntexist.org http://windows.dontexist.com Employ no technique to gain supreme enlightment. - Mar pa Chos kyi blos gros From kacperw at gmail.com Mon Mar 14 12:21:03 2011 From: kacperw at gmail.com (Kacper Wysocki) Date: Mon, 14 Mar 2011 12:21:03 +0100 Subject: VCL BNF In-Reply-To: <69458.1300098188@critter.freebsd.dk> References: <69458.1300098188@critter.freebsd.dk> Message-ID: On Mon, Mar 14, 2011 at 11:23 AM, Poul-Henning Kamp wrote: > In message , > Kacper Wysocki writes: > >>The BNF might not be 100% complete yet - there might be bugs - so wiki >>is appropriate. kwy is my trac login. > > Agreed. > > You should have wiki bit now. http://www.varnish-cache.org/trac/wiki/VCL.BNF I put a link under Documentation. 0K From schmidt at ze.tum.de Mon Mar 14 13:00:23 2011 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Mon, 14 Mar 2011 13:00:23 +0100 Subject: SSL In-Reply-To: References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> Message-ID: <4D7E0357.4070204@ze.tum.de> Am 14.03.2011 12:05, schrieb Kacper Wysocki: > On Mon, Mar 14, 2011 at 9:34 AM, Gerhard Schmidt wrote: >> Am 14.03.2011 08:55, schrieb Poul-Henning Kamp: >>> In message <4D7DC782.6050300 at ze.tum.de>, Gerhard Schmidt writes: >>> >>>> stunnel has the disatwantage that we loose the clientIP information. >>> >>> Doesn't it set a header with this information ? >> >> It's a tunnel. It doesn't change the stream. As I said, we use pound because >> it sets the header. But its another daemon to run and to setup. Another >> component that could fail. Integrating SSL in varnish would reduce the >> complexity. > > What you meant to say is "integrating SSL in Varnish would increase > complexity". > Putting that component inside varnish doesn't automatically make it > infallable. As an added bonus, if SSL is in a separate process it > won't bring the whole server down if it fails, if that's the kind of > stuff you're worried about. It does kill your serive if your service is SSL based. Managing more config and more daemons always increses the complexity. More Daemons increse the probabilty of failure and increase the monitioring requirements. More Daemons increase the probailty of security problems. More Daemons increase the amount of time spend keepings the system up to date. It might increase the complexity of varnish but not the system a hole. 
Regards Estartu -- ------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de TU-M?nchen | Jabber: estartu at ze.tum.de WWW & Online Services | Tel: 089/289-25270 | Fax: 089/289-25257 | PGP-Publickey auf Anfrage -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From perbu at varnish-software.com Mon Mar 14 13:10:41 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 13:10:41 +0100 Subject: SSL In-Reply-To: <4D7E0357.4070204@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> <4D7E0357.4070204@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 1:00 PM, Gerhard Schmidt wrote: > > It does kill your serive if your service is SSL based. > > Managing more config and more daemons always increses the complexity. > More Daemons increse the probabilty of failure and increase the monitioring > requirements. > More Daemons increase the probailty of security problems. > More Daemons increase the amount of time spend keepings the system up to > date. > First of all. Varnish is probably never getting SSL support built in so you can stop beating that horse. Also, in my opinion, it's easier to have two simple systems than one complex system. Having small dedicated programs is the beautiful design principle of Unix and as long as it won't influence performance I'm sold. IMO this is mostly a packaging issue. If we repackage stunnel as "varnish-ssl" and makes it "just work" it will be dead simple. It does however, put the pressure on us to maintain it, but that is minor. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Mon Mar 14 13:17:59 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Mar 2011 12:17:59 +0000 Subject: SSL In-Reply-To: Your message of "Mon, 14 Mar 2011 13:00:23 +0100." <4D7E0357.4070204@ze.tum.de> Message-ID: <54017.1300105079@critter.freebsd.dk> In message <4D7E0357.4070204 at ze.tum.de>, Gerhard Schmidt writes: >Managing more config and more daemons always increses the complexity. >More Daemons increse the probabilty of failure and increase the monitioring >requirements. >More Daemons increase the probailty of security problems. >More Daemons increase the amount of time spend keepings the system up to date. > >It might increase the complexity of varnish but not the system a hole. I can absolute guarantee you, that there would be no relevant difference in complexity, because the only way we can realistically add SSL to varnish is to start another daemon process to do it. Adding that complexity to Varnish will decrese the overall security relative to having the SSL daemon be an self-contained piece of software, simply as a matter of code complexity. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From perbu at varnish-software.com Mon Mar 14 14:06:15 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 14:06:15 +0100 Subject: SSL In-Reply-To: <4D7E10F9.1040904@ze.tum.de> References: <41707.1300089340@critter.freebsd.dk> <4D7DD321.7000906@ze.tum.de> <4D7E0357.4070204@ze.tum.de> <4D7E10F9.1040904@ze.tum.de> Message-ID: On Mon, Mar 14, 2011 at 1:58 PM, Gerhard Schmidt wrote: > > > Also, in my opinion, it's easier to have two simple systems than one > complex > > system. Having small dedicated programs is the beautiful design principle > of > > Unix and as long as it won't influence performance I'm sold. > > If there was a way to use simple dedicated service without loosing > information > this would be correct. But there isn't a simple daemon to accept ssl > connections for varnish without loosing the Client Information. > You didn't read the whole thread, did you? You obviously don't know about the PROXY protocol mode of the patched stunnel version we're talking about. It requires slight modifications of Varnish and would transmit client.ip initially when talking with Varnish. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.chiswell at mangahigh.com Mon Mar 14 18:02:16 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 14 Mar 2011 17:02:16 +0000 Subject: VCL Formatting Message-ID: <4D7E4A18.3030701@mangahigh.com> Hi, Does any know of, or have written, a code formatter for Varnish's VCL files which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? Ideally, with full syntax coloring and type hinting - but just something that can understand that VCL format and make sure the indents are "good" will do! Many thanks, Richard Chiswell http://twitter.com/rchiswell From richard.chiswell at mangahigh.com Mon Mar 14 18:04:28 2011 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 14 Mar 2011 17:04:28 +0000 Subject: VCL Formatting Message-ID: <4D7E4A9C.6020907@mangahigh.com> Hi, Does any know of, or have written, a code formatter for Varnish's VCL files which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? Ideally, with full syntax coloring and type hinting - but just something that can understand that VCL format and make sure the indents are "good" will do! Many thanks, Richard Chiswell http://twitter.com/rchiswell From perbu at varnish-software.com Mon Mar 14 18:33:14 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 14 Mar 2011 18:33:14 +0100 Subject: VCL Formatting In-Reply-To: <4D7E4A18.3030701@mangahigh.com> References: <4D7E4A18.3030701@mangahigh.com> Message-ID: Hi, On Mon, Mar 14, 2011 at 6:02 PM, Richard Chiswell < richard.chiswell at mangahigh.com> wrote: > Hi, > > Does any know of, or have written, a code formatter for Varnish's VCL files > which can be used by Netbeans, Eclipse or WebStorm/PHPStorm? > I use c-mode in Emacs - works ok for my somewhat limited needs. There probably is some codematting stuff for C you can use. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From checker at d6.com Mon Mar 14 23:30:14 2011 From: checker at d6.com (Chris Hecker) Date: Mon, 14 Mar 2011 15:30:14 -0700 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> Message-ID: <4D7E96F6.4060707@d6.com> Anybody have any ideas? They're not all the same mime type, so I think putting them in an uncached dir is better if there's no way to figure it out in vcl. Chris On 2011/03/13 07:26, AD wrote: > i dont think you can check the body size (at least it seems that way > with the existing req.* objects ). If you know the mime-type of the > file you might just be able to pipe the mime type if that works for all > file sizes ? > > I wonder if there is a way to pass the req object into some inline C > that can access the body somehow? > > On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker > wrote: > > > I have a 400mb file that I just want apache to serve. What's the > best way to do this? I can put it in a directory and tell varnish > not to cache stuff that matches that dir, but I'd rather just make a > general rule that varnish should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From moseleymark at gmail.com Mon Mar 14 23:51:19 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 14 Mar 2011 15:51:19 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7E96F6.4060707@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7E96F6.4060707@d6.com> Message-ID: On Mon, Mar 14, 2011 at 3:30 PM, Chris Hecker wrote: > > Anybody have any ideas? ?They're not all the same mime type, so I think > putting them in an uncached dir is better if there's no way to figure it out > in vcl. > > Chris > > > > On 2011/03/13 07:26, AD wrote: >> >> i dont think you can check the body size (at least it seems that way >> with the existing req.* objects ). ?If you know the mime-type of the >> file you might just be able to pipe the mime type if that works for all >> file sizes ? >> >> I wonder if there is a way to pass the req object into some inline C >> that can access the body somehow? >> >> On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker > > wrote: >> >> >> ? ?I have a 400mb file that I just want apache to serve. ?What's the >> ? ?best way to do this? ?I can put it in a directory and tell varnish >> ? ?not to cache stuff that matches that dir, but I'd rather just make a >> ? ?general rule that varnish should ignore >=20mb files or whatever. >> >> ? ?Thanks, >> ? ?Chris I was asking about the same thing in this thread: http://comments.gmane.org/gmane.comp.web.varnish.misc/4741 Check out Tollef's suggestion towards the end. That's what I've been using. The one drawback is that it's still fetched by varnish *completely* in the first, not-yet-restarted request, which means that a) you're fetching it twice; and b) it'll still stored albeit momentarily, so it'll evict stuff if there's not enough room. Before that, I wasn't sending any reqs for anything matching stuff like .avi or .wmv to varnish (from an nginx frontend). It'd be kind of neat if you could do a call-out and for anything matching a likely large file (i.e. has extension matching .avi, .wmv, etc), and do a HEAD request to determine the response size (or whatever you wanted to look for) before doing the GET. 
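For reference, a sketch of that restart trick in 2.1-style VCL. The X-Pass-Large marker header and the cut-off of roughly 10 MB (a Content-Length of eight or more digits) are arbitrary illustrative choices, and, as noted above, the large object is still transferred from the backend once in full before the passed refetch:

    sub vcl_recv {
        if (req.http.X-Pass-Large) {
            # Second pass after the restart: don't try to cache this one.
            return (pass);
        }
    }

    sub vcl_fetch {
        # On the first fetch only, mark "large" responses and restart the
        # request so it is handled as a pass.
        if (req.restarts == 0 &&
            beresp.http.Content-Length ~ "^[0-9]{8,}$") {
            set req.http.X-Pass-Large = "1";
            return (restart);
        }
    }

Returning pipe instead of pass in vcl_recv is another option for very large objects, at the cost of Varnish dropping normal request processing for the remainder of that client connection.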
From straightflush at gmail.com Tue Mar 15 02:48:08 2011 From: straightflush at gmail.com (AD) Date: Mon, 14 Mar 2011 21:48:08 -0400 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7E96F6.4060707@d6.com> Message-ID: whats interesting is the last comment All this happens over localhost, so it's quite fast, but in the | interest of efficiency, is there something I can set or call so that | it closes that first connection almost immediately? Having to refetch | a 800meg file off of NFS might hurt -- even if a good chunk of it is | still in the OS block cache. You'd need to do this using inline C, but yes, anything is possible. (Sorry, I don't have an example for it here) What do you need to do via inline C to prevent the full 800 MB from being downloaded even the first time? On Mon, Mar 14, 2011 at 6:51 PM, Mark Moseley wrote: > On Mon, Mar 14, 2011 at 3:30 PM, Chris Hecker wrote: > > > > Anybody have any ideas? They're not all the same mime type, so I think > > putting them in an uncached dir is better if there's no way to figure it > out > > in vcl. > > > > Chris > > > > > > > > On 2011/03/13 07:26, AD wrote: > >> > >> i dont think you can check the body size (at least it seems that way > >> with the existing req.* objects ). If you know the mime-type of the > >> file you might just be able to pipe the mime type if that works for all > >> file sizes ? > >> > >> I wonder if there is a way to pass the req object into some inline C > >> that can access the body somehow? > >> > >> On Sat, Mar 12, 2011 at 11:28 PM, Chris Hecker >> > wrote: > >> > >> > >> I have a 400mb file that I just want apache to serve. What's the > >> best way to do this? I can put it in a directory and tell varnish > >> not to cache stuff that matches that dir, but I'd rather just make a > >> general rule that varnish should ignore >=20mb files or whatever. > >> > >> Thanks, > >> Chris > > > I was asking about the same thing in this thread: > > http://comments.gmane.org/gmane.comp.web.varnish.misc/4741 > > Check out Tollef's suggestion towards the end. That's what I've been > using. The one drawback is that it's still fetched by varnish > *completely* in the first, not-yet-restarted request, which means that > a) you're fetching it twice; and b) it'll still stored albeit > momentarily, so it'll evict stuff if there's not enough room. > > Before that, I wasn't sending any reqs for anything matching stuff > like .avi or .wmv to varnish (from an nginx frontend). > > It'd be kind of neat if you could do a call-out and for anything > matching a likely large file (i.e. has extension matching .avi, .wmv, > etc), and do a HEAD request to determine the response size (or > whatever you wanted to look for) before doing the GET. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at d6.com Tue Mar 15 08:42:46 2011 From: checker at d6.com (Chris Hecker) Date: Tue, 15 Mar 2011 00:42:46 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F17D5.2090002@bizztravel.nl> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> Message-ID: <4D7F1876.7080809@d6.com> Yeah, I think if I can't do it Right (which I define as checking the file size in the vcl), then I'm just going to make blah.com/uncached/* be uncached. 
I don't want to transfer it once just to throw it away. Chris On 2011/03/15 00:40, Martin Boer wrote: > I've been reading this discussion and imho the most elegant way to do it > is to have a upload directory X and 2 download directories Y and Z with > a script in between that decides whether it's cacheable and move the > file to Y or uncacheable and put it in Z. > All the other solutions mentioned in between are far more intelligent > and much more likely to backfire in some way or another. > > Just my 2 cents. > Martin > > > On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >> I have a 400mb file that I just want apache to serve. What's the best >> way to do this? I can put it in a directory and tell varnish not to >> cache stuff that matches that dir, but I'd rather just make a general >> rule that varnish should ignore >=20mb files or whatever. >> >> Thanks, >> Chris >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > > From martin.boer at bizztravel.nl Tue Mar 15 08:40:05 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Tue, 15 Mar 2011 08:40:05 +0100 Subject: best way to not cache large files? In-Reply-To: <4D7C47D0.9050809@d6.com> References: <4D7C47D0.9050809@d6.com> Message-ID: <4D7F17D5.2090002@bizztravel.nl> I've been reading this discussion and imho the most elegant way to do it is to have a upload directory X and 2 download directories Y and Z with a script in between that decides whether it's cacheable and move the file to Y or uncacheable and put it in Z. All the other solutions mentioned in between are far more intelligent and much more likely to backfire in some way or another. Just my 2 cents. Martin On 03/13/2011 05:28 AM, Chris Hecker wrote: > > I have a 400mb file that I just want apache to serve. What's the best > way to do this? I can put it in a directory and tell varnish not to > cache stuff that matches that dir, but I'd rather just make a general > rule that varnish should ignore >=20mb files or whatever. > > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From perbu at varnish-software.com Tue Mar 15 10:46:16 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 15 Mar 2011 10:46:16 +0100 Subject: Online training Message-ID: Hi List. I promise I won't do this to often but I wanted to make you aware that we (Varnish Software) will now be offering online training. We have free seats in the upcoming session on the 24th and 25th of March (targeted mainly towards European time zones). We'll have sessions for US timezones in April. We're also planning a session for NZ and Aussies, but no date is set for this session yet. If your interested please drop me a mail. All our training is conducted by varnish cache committers. Regards, Per. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roberto.fernandezcrisial at gmail.com Tue Mar 15 14:50:30 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 15 Mar 2011 10:50:30 -0300 Subject: Online training In-Reply-To: References: Message-ID: Hi guys, I need some help and I think you can help me. A few days ago I was realized that Varnish is showing some error messages when debug mode is enable on varnishlog: 4741 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2959 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2591 Debug c "Write error, retval = -1, len = 168289, errno = Broken pipe" 3517 Debug c "Write error, retval = -1, len = 114421, errno = Broken pipe" I want to know what are those error messages and why are they generated. Any suggestion? Thank you! Roberto O. Fern?ndez Crisial -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Tue Mar 15 14:51:13 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 15 Mar 2011 10:51:13 -0300 Subject: VarnishLog: Broken pipe (Debug) Message-ID: Hi guys, I need some help and I think you can help me. A few days ago I was realized that Varnish is showing some error messages when debug mode is enable on varnishlog: 4741 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2959 Debug c "Write error, retval = -1, len = 237299, errno = Broken pipe" 2591 Debug c "Write error, retval = -1, len = 168289, errno = Broken pipe" 3517 Debug c "Write error, retval = -1, len = 114421, errno = Broken pipe" I want to know what are those error messages and why are they generated. Any suggestion? Thank you! Roberto O. Fern?ndez Crisial -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Tue Mar 15 17:44:26 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Tue, 15 Mar 2011 09:44:26 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F1876.7080809@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: On Tue, Mar 15, 2011 at 12:42 AM, Chris Hecker wrote: > > Yeah, I think if I can't do it Right (which I define as checking the file > size in the vcl), then I'm just going to make blah.com/uncached/* be > uncached. ?I don't want to transfer it once just to throw it away. > > Chris > > > On 2011/03/15 00:40, Martin Boer wrote: >> >> I've been reading this discussion and imho the most elegant way to do it >> is to have a upload directory X and 2 download directories Y and Z with >> a script in between that decides whether it's cacheable and move the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >>> >>> I have a 400mb file that I just want apache to serve. What's the best >>> way to do this? I can put it in a directory and tell varnish not to >>> cache stuff that matches that dir, but I'd rather just make a general >>> rule that varnish should ignore >=20mb files or whatever. >>> >>> Thanks, >>> Chris Yeah, if you have control over directory names, that's by far the better way to go. 
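To make the directory-based approach concrete, a minimal sketch, assuming the /uncached/ prefix Chris mentioned earlier is what gets agreed on (the path is only a placeholder):

    sub vcl_recv {
        # Anything published under /uncached/ goes straight to the backend
        # on every request and is never stored by Varnish.
        if (req.url ~ "^/uncached/") {
            return (pass);
        }
    }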
I've got shared hosting customers behind mine, so I've got practically no control over where they put stuff under their webroot. From kbrownfield at google.com Tue Mar 15 21:16:11 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 13:16:11 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7F1876.7080809@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: I'm assuming that in this case it's not possible for you to have the backend server emit an appropriate Cache-Control or Expires header based on the size of the file? The server itself will know the file size before transmission, and the reindeer caching games would not be necessary. ;-) That's definitely the Right Way, but it would require control over the backend, which is often not possible. Apache unfortunately doesn't have a built-in mechanism/module to emit a header based on file size, at least that I can find. :-( -- kb On Tue, Mar 15, 2011 at 00:42, Chris Hecker wrote: > > Yeah, I think if I can't do it Right (which I define as checking the file > size in the vcl), then I'm just going to make blah.com/uncached/* be > uncached. I don't want to transfer it once just to throw it away. > > Chris > > > > On 2011/03/15 00:40, Martin Boer wrote: > >> I've been reading this discussion and imho the most elegant way to do it >> is to have a upload directory X and 2 download directories Y and Z with >> a script in between that decides whether it's cacheable and move the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >>> >>> I have a 400mb file that I just want apache to serve. What's the best >>> way to do this? I can put it in a directory and tell varnish not to >>> cache stuff that matches that dir, but I'd rather just make a general >>> rule that varnish should ignore >=20mb files or whatever. >>> >>> Thanks, >>> Chris >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >>> >>> >> >> > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From do.not.eat.yellow.snow at gmail.com Tue Mar 15 21:30:02 2011 From: do.not.eat.yellow.snow at gmail.com (Martin Strand) Date: Tue, 15 Mar 2011 21:30:02 +0100 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: On Tue, 15 Mar 2011 21:16:11 +0100, Ken Brownfield wrote: > > Apache unfortunately doesn't have a built-in mechanism/module to emit a > header based on file size What about the "Content-Length" header? Apache seems to emit that automatically. From kbrownfield at google.com Tue Mar 15 22:59:49 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 14:59:49 -0700 Subject: best way to not cache large files? 
In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: I think mod_headers/SetEnvIf/etc is applied at request time, before processing occurs (the parameters they have available to them are quite limited). But there may be a way to do later in the chain, and certainly with a custom mod. -- kb On Tue, Mar 15, 2011 at 13:30, Martin Strand < do.not.eat.yellow.snow at gmail.com> wrote: > On Tue, 15 Mar 2011 21:16:11 +0100, Ken Brownfield > wrote: > >> >> Apache unfortunately doesn't have a built-in mechanism/module to emit a >> header based on file size >> > > What about the "Content-Length" header? Apache seems to emit that > automatically. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at d6.com Wed Mar 16 00:56:37 2011 From: checker at d6.com (Chris Hecker) Date: Tue, 15 Mar 2011 16:56:37 -0700 Subject: best way to not cache large files? In-Reply-To: References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> Message-ID: <4D7FFCB5.6030105@d6.com> I'm not sure I understand. I have control over the back end, the front end, the middle end, all the ends. However, I thought the problem was there was no way to get varnish to read the header without loading the file into the cache? If that's not true, then shouldn't Content-Length be enough? Chris On 2011/03/15 13:16, Ken Brownfield wrote: > I'm assuming that in this case it's not possible for you to have the > backend server emit an appropriate Cache-Control or Expires header based > on the size of the file? The server itself will know the file size > before transmission, and the reindeer caching games would not be > necessary. ;-) > > That's definitely the Right Way, but it would require control over the > backend, which is often not possible. Apache unfortunately doesn't have > a built-in mechanism/module to emit a header based on file size, at > least that I can find. :-( > -- > kb > > > > On Tue, Mar 15, 2011 at 00:42, Chris Hecker > wrote: > > > Yeah, I think if I can't do it Right (which I define as checking the > file size in the vcl), then I'm just going to make > blah.com/uncached/* be uncached. I > don't want to transfer it once just to throw it away. > > Chris > > > > On 2011/03/15 00:40, Martin Boer wrote: > > I've been reading this discussion and imho the most elegant way > to do it > is to have a upload directory X and 2 download directories Y and > Z with > a script in between that decides whether it's cacheable and move the > file to Y or uncacheable and put it in Z. > All the other solutions mentioned in between are far more > intelligent > and much more likely to backfire in some way or another. > > Just my 2 cents. > Martin > > > On 03/13/2011 05:28 AM, Chris Hecker wrote: > > > I have a 400mb file that I just want apache to serve. What's > the best > way to do this? I can put it in a directory and tell varnish > not to > cache stuff that matches that dir, but I'd rather just make > a general > rule that varnish should ignore >=20mb files or whatever. 
> > Thanks, > Chris > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From kbrownfield at google.com Wed Mar 16 03:45:46 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 15 Mar 2011 19:45:46 -0700 Subject: best way to not cache large files? In-Reply-To: <4D7FFCB5.6030105@d6.com> References: <4D7C47D0.9050809@d6.com> <4D7F17D5.2090002@bizztravel.nl> <4D7F1876.7080809@d6.com> <4D7FFCB5.6030105@d6.com> Message-ID: If you have control over the backend (Apache) it should be made to emit a Cache-Control or Expires header to Varnish to make the object non-cacheable *if the file is too large*. Apache will know the file's size before a request occurs. I was talking about logic within Apache, not Varnish. This is how it's "supposed" to happen. With Varnish, I see no way to avoid downloading the entire file every time. You can control whether the file *stays* in cache, but that's it. If there were a URL pattern (e.g., magic subdirectory), you could conceivably switch to pipe in those cases. Thinking out loud... HTTP servers will send a response to a HEAD request with a Content-Length header that represents the length of the full object had a GET been performed. If your Apache does this (some configurations will disable this), one hack would be to have Varnish send a HEAD request to Apache for every object, set a req flag if the returned content length is too large, then restart, and then have logic that will force pipe if it's too large, otherwise pass. This will double the hits to the back-end, however, so some conditionals would help (only .mov, or only a certain subdirectory, etc.) And I've never tried changing a GET to a HEAD with VCL or inline-C. But usually when something is that difficult, it's a square peg and a round hole. :-) FWIW, -- kb On Tue, Mar 15, 2011 at 16:56, Chris Hecker wrote: > > I'm not sure I understand. I have control over the back end, the front > end, the middle end, all the ends. However, I thought the problem was there > was no way to get varnish to read the header without loading the file into > the cache? If that's not true, then shouldn't Content-Length be enough? > > Chris > > On 2011/03/15 13:16, Ken Brownfield wrote: > >> I'm assuming that in this case it's not possible for you to have the >> backend server emit an appropriate Cache-Control or Expires header based >> on the size of the file? The server itself will know the file size >> before transmission, and the reindeer caching games would not be >> necessary. ;-) >> >> That's definitely the Right Way, but it would require control over the >> backend, which is often not possible. Apache unfortunately doesn't have >> a built-in mechanism/module to emit a header based on file size, at >> least that I can find. :-( >> -- >> kb >> >> >> >> On Tue, Mar 15, 2011 at 00:42, Chris Hecker > > wrote: >> >> >> Yeah, I think if I can't do it Right (which I define as checking the >> file size in the vcl), then I'm just going to make >> blah.com/uncached/* be uncached. 
I >> don't want to transfer it once just to throw it away. >> >> Chris >> >> >> >> On 2011/03/15 00:40, Martin Boer wrote: >> >> I've been reading this discussion and imho the most elegant way >> to do it >> is to have a upload directory X and 2 download directories Y and >> Z with >> a script in between that decides whether it's cacheable and move >> the >> file to Y or uncacheable and put it in Z. >> All the other solutions mentioned in between are far more >> intelligent >> and much more likely to backfire in some way or another. >> >> Just my 2 cents. >> Martin >> >> >> On 03/13/2011 05:28 AM, Chris Hecker wrote: >> >> >> I have a 400mb file that I just want apache to serve. What's >> the best >> way to do this? I can put it in a directory and tell varnish >> not to >> cache stuff that matches that dir, but I'd rather just make >> a general >> rule that varnish should ignore >=20mb files or whatever. >> >> Thanks, >> Chris >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisbloom7 at gmail.com Wed Mar 16 14:58:39 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Wed, 16 Mar 2011 09:58:39 -0400 Subject: Session issues when using Varnish Message-ID: I have been investigating an issue on a client's website that is very peculiar. I have verified that the behavior is due to the instance of Varnish that Rackspace configured for us. However, I'm not sure if this constitutes a bug in Varnish or a configuration error. I'm hoping someone can verify it for me one way or the other. Here is the scenario: Some of our PHP pages are protected by way of verifying that certain session variables are set. If not, the user is sent to the login page. We have observed that on URLs in which there is a querystring, and when the last value of that querystring ends in ".jpg", ".jpeg", ".gif", or ".png", and when we have an iptable rule that routes requests from port 80 to Varnish, the session is reset completely. Oddly enough, no other extension seems to have this affect. I have recreated this behavior in a clean PHP file, which I've attached. You can test this script on your own using the following URLs. The ones marked with the * are where the session gets reset. 
http://localhost/test_cdb.php http://localhost/test_cdb.php?foo=1 http://localhost/test_cdb.php?foo=1&baz=bix http://localhost/test_cdb.php?foo=1&baz=bix.far http://localhost/test_cdb.php?foo=1&baz=bix.far.jpg * http://localhost/test_cdb.php?foo=1&baz=bix.fur http://localhost/test_cdb.php?foo=1&baz=bix.gif * http://localhost/test_cdb.php?foo=1&baz=bix.bmp http://localhost/test_cdb.php?foo=1&baz=bix.php http://localhost/test_cdb.php?foo=1&baz=bix.exe http://localhost/test_cdb.php?foo=1&baz=bix.tar http://localhost/test_cdb.php?foo=1&baz=bix.jpeg * Here is the rule we created for iptables -A PREROUTING -t nat -d x.x.x.128 -p tcp -m tcp --dport 80 -j DNAT --to-destination x.x.x.128:6081 Chris Bloom Internet Application Developer -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_cdb.php Type: application/x-httpd-php Size: 721 bytes Desc: not available URL: From bjorn at ruberg.no Wed Mar 16 15:15:04 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Wed, 16 Mar 2011 15:15:04 +0100 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D80C5E8.8040503@ruberg.no> On 03/16/2011 02:58 PM, Chris Bloom wrote: > I have been investigating an issue on a client's website that is very > peculiar. I have verified that the behavior is due to the instance of > Varnish that Rackspace configured for us. However, I'm not sure if this > constitutes a bug in Varnish or a configuration error. I'm hoping > someone can verify it for me one way or the other. > > Here is the scenario: Some of our PHP pages are protected by way of > verifying that certain session variables are set. If not, the user is > sent to the login page. We have observed that on URLs in which there is > a querystring, and when the last value of that querystring ends in > ".jpg", ".jpeg", ".gif", or ".png", and when we have an iptable rule > that routes requests from port 80 to Varnish, the session is reset > completely. Oddly enough, no other extension seems to have this affect. This *looks* like some general Varnish rule that removes any (session) cookies when the URL (including the query string) ends with jpg, jpeg etc. However, since you did not include the Varnish configuration or Varnish logs, you will only receive guesswork. Your test file is of absolutely no value as long as you didn't a) provide the real URL for remote diagnosis and/or b) the VCL for local testing. Without any information on the Varnish configuration, further requests for assistance should be directed to your provider. You need someone with access to the VCL to be able to confirm your issue. The symptoms should be sufficiently descriptive, as long as they reach someone who can do anything about it. We can't. Good luck, -- Bj?rn From chrisbloom7 at gmail.com Wed Mar 16 16:55:56 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Wed, 16 Mar 2011 11:55:56 -0400 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: Thank you, Bjorn, for your response. Our hosting provider tells me that the following routines have been added to the default config. 
sub vcl_recv { # Cache things with these extensions if (req.url ~ "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { unset req.http.cookie; return (lookup); } } sub vcl_fetch { # Cache things with these extensions if (req.url ~ "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { unset req.http.set-cookie; set obj.ttl = 1h; } } Clearly the req.url variable contains the entire request URL, including the querystring. Is there another variable that I should be using instead that would only include the script name? If this is the default behavior, I'm inclined to cry "bug". You can test that other script for yourself by substituting maxisavergroup.com for the domain in the example URLs I provided. PS: We are using Varnish 2.0.6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Wed Mar 16 17:48:47 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Wed, 16 Mar 2011 13:48:47 -0300 Subject: Limited urls Message-ID: Hi guys, I am trying to restrict some access to my Varnish. I want to accept only requests for domain1.com and domain2.com, but deny access to server's IP address. This is my vcl_recv: if (req.http.host ~ ".*domain1.*") { set req.backend = domain1; } elseif (req.http.host ~ ".*domain2.*") { set req.backend = domain2; } else { error 405 "Sorry!"; } Am I doing the right way? Do you have any suggestion? Thank you, Roberto. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn at ruberg.no Wed Mar 16 19:03:52 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Wed, 16 Mar 2011 19:03:52 +0100 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D80FB88.5030907@ruberg.no> On 03/16/2011 04:55 PM, Chris Bloom wrote: > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been > added to the default config. > > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } This is a rather standard config, not designed for corner cases like yours. > Clearly the req.url variable contains the entire request URL, including > the querystring. Is there another variable that I should be using > instead that would only include the script name? If this is the default > behavior, I'm inclined to cry "bug". You can start crying bug after you've convinced the rest of the Internet world, including all browsers, that the query string should not be considered part of the URL. In the meantime, I suggest you let your provider know that your application has special requirements that they will need to accommodate. Your provider can't offer proper service when they don't know your requirements. To provide you with a useful Varnish configuration, your provider needs to know quite a few things about how your application works. This includes any knowledge of cookies and when Varnish should and should not allow them. Since you ask the Varnish community instead of discussing this with your provider, I guess these requirements were never communicated. 
A few tips you and your provider can consider: a) Perhaps a second cookie could be set by the backend application for logged-in users. A configuration could be made so that Varnish would choose to not remove cookies from the file suffixes listed if this cookie was present. b) If the path(s)/filename(s) where the query string may include the mentioned file suffixes are identifiable, your provider could create an exception for those. E.g. if ?foo=bar.jpg only occurs with /some/test/file.php, then the if clause in vcl_recv could take that into consideration. c) Regular expressions in 2.0.6 are case insensitive, so listing both "jpg" and "JPG" in the same expression is unnecessary. - Bj?rn
From davidpetzel at gmail.com Wed Mar 16 19:21:21 2011 From: davidpetzel at gmail.com (David Petzel) Date: Wed, 16 Mar 2011 14:21:21 -0400 Subject: Question on Re-Using Backend Probes Message-ID: I'm really new to varnish, so please forgive me if this is answered elsewhere, I did some searching and couldn't seem to find it however. I was reviewing the documentation and I have a question about back end probes. I'm setting up a director that will have 10-12 backends. I want each backend to use the same health check, but I don't want to have to re-define the probe 10-12 times. Is it possible to define the probe externally to the backend configuration, and then reference it? Something like the following: probe myProbe1 { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } backend server1 { .host = "server1.example.com"; .probe = myProbe1 } backend server2 { .host = "server2.example.com"; .probe = myProbe1 } All of the examples I've come across have the probe redefined again. For example, on http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html#health-checks they show the following example, which feels redundant. backend server1 { .host = "server1.example.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } } backend server2 { .host = "server2.example.com"; .probe = { .url = "/"; .interval = 5s; .timeout = 1 s; .window = 5; .threshold = 3; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 16 19:29:10 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 16 Mar 2011 19:29:10 +0100 Subject: Question on Re-Using Backend Probes In-Reply-To: References: Message-ID: Hi David. On Wed, Mar 16, 2011 at 7:21 PM, David Petzel wrote: > I'm really new to varnish, so please forgive me if this is answered elsewhere, > I did some searching and couldn't seem to find it however. > I was reviewing the documentation and I have a question about back end probes. > I'm setting up a director that will have 10-12 backends. I want each > backend to use the same health check, but I don't want to have to re-define > the probe 10-12 times. Is it possible to define the probe externally to the > backend configuration, and then reference it? No. That is not possible. However, you could use a macro language of sorts to pre-process the configuration. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish?
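As a follow-up note on this question: Varnish releases after this thread (the 3.0 series) added named probes that can be shared between backends, which is essentially the syntax asked about above. A sketch of that form, to be checked against the documentation of the version actually deployed:

    probe healthcheck {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }

    backend server1 {
        .host = "server1.example.com";
        .probe = healthcheck;
    }

    backend server2 {
        .host = "server2.example.com";
        .probe = healthcheck;
    }

On 2.1, the pre-processing route Per describes (generating the VCL with a macro or template tool before loading it) remains the practical option.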
http://www.varnish-software.com/whitepapers From dhelkowski at sbgnet.com Wed Mar 16 19:51:35 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Wed, 16 Mar 2011 14:51:35 -0400 Subject: Session issues when using Varnish In-Reply-To: References: Message-ID: <4D8106B7.5030604@sbgnet.com> The vcl you are showing may be standard, but as you have noticed it will not work properly when query strings end in a file extension. I encountered this same problem after blindly copying from example varnish configurations. Before the check is done, the query parameter needs to be stripped from the url. Example of an alternate way to check the extensions: sub vcl_recv { ... set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { return(lookup); } ... } Doubtless others will say this approach is wrong for some reason or another. I use it in a production environment and it works fine though. Pass it along to your hosting provider and request that they consider changing their config. Note that the above code will cause the end user to receive a 'ext' header with the file extension. You can add a 'remove req.http.ext' after the code if you don't want that to happen... Another thing to consider is that whether it this is a bug or not; it is a common problem with varnish configurations, and as such can be used on most varnish servers to force them to return things differently then they normally would. IE: if some backend script is a huge request and eats up resources, sending it a '?.jpg' could be used to hit it repeatedly and bring about a denial of service. On 3/16/2011 11:55 AM, Chris Bloom wrote: > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been > added to the default config. > > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } > > Clearly the req.url variable contains the entire request URL, > including the querystring. Is there another variable that I should be > using instead that would only include the script name? If this is the > default behavior, I'm inclined to cry "bug". > > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the > example URLs I provided. > > PS: We are using Varnish 2.0.6 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 16 19:59:02 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 16 Mar 2011 19:59:02 +0100 Subject: Session issues when using Varnish In-Reply-To: <4D8106B7.5030604@sbgnet.com> References: <4D8106B7.5030604@sbgnet.com> Message-ID: Hi David, List. I think I'll use this snipplet in the documentation if you don't mind. I need to work in more regsub calls there anyway. Cheers, Per. 
On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski wrote: > The vcl you are showing may be standard, but as you have noticed it will not > work properly when > query strings end in a file extension. I encountered this same problem after > blindly copying from > example varnish configurations. > Before the check is done, the query parameter needs to be stripped from the > url. > Example of an alternate way to check the extensions: > > sub vcl_recv { > ??? ... > ??? set req.http.ext = regsub( req.url, "\?.+$", "" ); > ??? set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > ??? if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > ????? return(lookup); > ??? } > ??? ... > } > > Doubtless others will say this approach is wrong for some reason or another. > I use it in a production > environment and it works fine though. Pass it along to your hosting provider > and request that they > consider changing their config. > > Note that the above code will cause the end user to receive a 'ext' header > with the file extension. > You can add a 'remove req.http.ext' after the code if you don't want that to > happen... > > Another thing to consider is that whether it this is a bug or not; it is a > common problem with varnish > configurations, and as such can be used on most varnish servers to force > them to return things > differently then they normally would. IE: if some backend script is a huge > request and eats up resources, sending > it a '?.jpg' could be used to hit it repeatedly and bring about a denial of > service. > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > Thank you, Bjorn, for your response. > Our hosting provider tells me that the following routines have been added to > the default config. > sub vcl_recv { > ??# Cache things with these extensions > ??if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > ?? ?unset req.http.cookie; > ?? ?return (lookup); > ??} > } > sub vcl_fetch { > ??# Cache things with these extensions > ??if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > ?? ?unset req.http.set-cookie; > ?? ?set obj.ttl = 1h; > ??} > } > Clearly the req.url variable contains the entire request URL, including the > querystring. Is there another variable that I should be using instead that > would only include the script name? If this is the default behavior, I'm > inclined to cry "bug". > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the example URLs I provided. > PS: We are using Varnish 2.0.6 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
http://www.varnish-software.com/whitepapers From straightflush at gmail.com Wed Mar 16 20:30:31 2011 From: straightflush at gmail.com (AD) Date: Wed, 16 Mar 2011 15:30:31 -0400 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: You can remove the header so it doesnt get set set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { * remove req.http.ext; * return(lookup); } On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: > Hi David, List. > > I think I'll use this snipplet in the documentation if you don't mind. > I need to work in more regsub calls there anyway. > > Cheers, > > Per. > > On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski > wrote: > > The vcl you are showing may be standard, but as you have noticed it will > not > > work properly when > > query strings end in a file extension. I encountered this same problem > after > > blindly copying from > > example varnish configurations. > > Before the check is done, the query parameter needs to be stripped from > the > > url. > > Example of an alternate way to check the extensions: > > > > sub vcl_recv { > > ... > > set req.http.ext = regsub( req.url, "\?.+$", "" ); > > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > > if( req.http.ext ~ > > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > > return(lookup); > > } > > ... > > } > > > > Doubtless others will say this approach is wrong for some reason or > another. > > I use it in a production > > environment and it works fine though. Pass it along to your hosting > provider > > and request that they > > consider changing their config. > > > > Note that the above code will cause the end user to receive a 'ext' > header > > with the file extension. > > You can add a 'remove req.http.ext' after the code if you don't want that > to > > happen... > > > > Another thing to consider is that whether it this is a bug or not; it is > a > > common problem with varnish > > configurations, and as such can be used on most varnish servers to force > > them to return things > > differently then they normally would. IE: if some backend script is a > huge > > request and eats up resources, sending > > it a '?.jpg' could be used to hit it repeatedly and bring about a denial > of > > service. > > > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > > > Thank you, Bjorn, for your response. > > Our hosting provider tells me that the following routines have been added > to > > the default config. > > sub vcl_recv { > > # Cache things with these extensions > > if (req.url ~ > > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > > unset req.http.cookie; > > return (lookup); > > } > > } > > sub vcl_fetch { > > # Cache things with these extensions > > if (req.url ~ > > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > > unset req.http.set-cookie; > > set obj.ttl = 1h; > > } > > } > > Clearly the req.url variable contains the entire request URL, including > the > > querystring. Is there another variable that I should be using instead > that > > would only include the script name? If this is the default behavior, I'm > > inclined to cry "bug". > > You can test that other script for yourself by substituting > > maxisavergroup.com for the domain in the example URLs I provided. 
> > PS: We are using Varnish 2.0.6 > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Wed Mar 16 23:58:31 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Wed, 16 Mar 2011 15:58:31 -0700 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: Or not set a header at all: if ( req.url ~ "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { unset req.http.cookie return(lookup); } Didn't test the regex with Varnish's regex handling. -- kb On Wed, Mar 16, 2011 at 12:30, AD wrote: > You can remove the header so it doesnt get set > > set req.http.ext = regsub( req.url, "\?.+$", "" ); > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > * remove req.http.ext; * > return(lookup); > } > > > > On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: > >> Hi David, List. >> >> I think I'll use this snipplet in the documentation if you don't mind. >> I need to work in more regsub calls there anyway. >> >> Cheers, >> >> Per. >> >> On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski >> wrote: >> > The vcl you are showing may be standard, but as you have noticed it will >> not >> > work properly when >> > query strings end in a file extension. I encountered this same problem >> after >> > blindly copying from >> > example varnish configurations. >> > Before the check is done, the query parameter needs to be stripped from >> the >> > url. >> > Example of an alternate way to check the extensions: >> > >> > sub vcl_recv { >> > ... >> > set req.http.ext = regsub( req.url, "\?.+$", "" ); >> > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); >> > if( req.http.ext ~ >> > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >> > return(lookup); >> > } >> > ... >> > } >> > >> > Doubtless others will say this approach is wrong for some reason or >> another. >> > I use it in a production >> > environment and it works fine though. Pass it along to your hosting >> provider >> > and request that they >> > consider changing their config. >> > >> > Note that the above code will cause the end user to receive a 'ext' >> header >> > with the file extension. >> > You can add a 'remove req.http.ext' after the code if you don't want >> that to >> > happen... >> > >> > Another thing to consider is that whether it this is a bug or not; it is >> a >> > common problem with varnish >> > configurations, and as such can be used on most varnish servers to force >> > them to return things >> > differently then they normally would. 
IE: if some backend script is a >> huge >> > request and eats up resources, sending >> > it a '?.jpg' could be used to hit it repeatedly and bring about a denial >> of >> > service. >> > >> > On 3/16/2011 11:55 AM, Chris Bloom wrote: >> > >> > Thank you, Bjorn, for your response. >> > Our hosting provider tells me that the following routines have been >> added to >> > the default config. >> > sub vcl_recv { >> > # Cache things with these extensions >> > if (req.url ~ >> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >> > unset req.http.cookie; >> > return (lookup); >> > } >> > } >> > sub vcl_fetch { >> > # Cache things with these extensions >> > if (req.url ~ >> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >> > unset req.http.set-cookie; >> > set obj.ttl = 1h; >> > } >> > } >> > Clearly the req.url variable contains the entire request URL, including >> the >> > querystring. Is there another variable that I should be using instead >> that >> > would only include the script name? If this is the default behavior, I'm >> > inclined to cry "bug". >> > You can test that other script for yourself by substituting >> > maxisavergroup.com for the domain in the example URLs I provided. >> > PS: We are using Varnish 2.0.6 >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> >> >> >> -- >> Per Buer, Varnish Software >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer >> Varnish makes websites fly! >> Want to learn more about Varnish? >> http://www.varnish-software.com/whitepapers >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoiz.shine at gmail.com Thu Mar 17 03:34:55 2011 From: amoiz.shine at gmail.com (Sharl.Jimh.Tsin) Date: Thu, 17 Mar 2011 10:34:55 +0800 Subject: Limited urls In-Reply-To: References: Message-ID: yes,it is right. Best regards, Sharl.Jimh.Tsin (From China **Obviously Taiwan INCLUDED**) 2011/3/17 Roberto O. Fern?ndez Crisial : > Hi guys, > I am trying to?restrict?some access to my Varnish. I?want?to accept only > requests for?domain1.com and?domain2.com, but deny access to server's IP > address. This is my vcl_recv: > if (req.http.host ~ ".*domain1.*") > { > > set req.backend = domain1; > > } > elseif (req.http.host ~ ".*domain2.*") > { > > set req.backend = domain2; > > } > else > { > > error 405 "Sorry!"; > > } > Am I doing the right way? Do you have any suggestion? > Thank you, > Roberto. 
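One small refinement you may want, though: a pattern like ".*domain1.*" also matches hosts such as "domain1.example.net", so it is safer to anchor the expressions, and chaining the tests with elsif lets a request that arrives with the bare IP (or any other Host value) fall through to the error. A rough sketch, untested, keeping your backend names and error line (the optional ":port" part is only an assumption for the case where clients can reach Varnish on a non-standard port):

    sub vcl_recv {
        if (req.http.host ~ "(^|\.)domain1\.com(:[0-9]+)?$") {
            set req.backend = domain1;
        } elsif (req.http.host ~ "(^|\.)domain2\.com(:[0-9]+)?$") {
            set req.backend = domain2;
        } else {
            error 405 "Sorry!";
        }
    }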
> _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From dhelkowski at sbgnet.com Thu Mar 17 03:40:16 2011 From: dhelkowski at sbgnet.com (David Helkowski) Date: Wed, 16 Mar 2011 22:40:16 -0400 (EDT) Subject: Session issues when using Varnish In-Reply-To: <762709593.897379.1300329124628.JavaMail.root@mail-01.sbgnet.com> Message-ID: <1185929555.897407.1300329616885.JavaMail.root@mail-01.sbgnet.com> I agree that this is a better expression to use if you are only testing one set of extensions and don't intend to do anything else with the extension itself. Using the same method: ( if you want to capture the extension for some reason ) set req.http.ext = regsub( req.url, "^[^\?]*?\.([a-zA-Z]+)($|\?)", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { return(lookup); } I also have not tested this; but it should work assuming the other example works. ----- Original Message ----- From: "Ken Brownfield" To: varnish-misc at varnish-cache.org Sent: Wednesday, March 16, 2011 6:58:31 PM Subject: Re: Session issues when using Varnish Or not set a header at all: if ( req.url ~ "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { unset req.http.cookie return(lookup); } Didn't test the regex with Varnish's regex handling. -- kb On Wed, Mar 16, 2011 at 12:30, AD < straightflush at gmail.com > wrote: You can remove the header so it doesnt get set set req.http.ext = regsub( req.url, "\?.+$", "" ); set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); if( req.http.ext ~ "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { remove req.http.ext; return(lookup); } On Wed, Mar 16, 2011 at 2:59 PM, Per Buer < perbu at varnish-software.com > wrote: Hi David, List. I think I'll use this snipplet in the documentation if you don't mind. I need to work in more regsub calls there anyway. Cheers, Per. On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski < dhelkowski at sbgnet.com > wrote: > The vcl you are showing may be standard, but as you have noticed it will not > work properly when > query strings end in a file extension. I encountered this same problem after > blindly copying from > example varnish configurations. > Before the check is done, the query parameter needs to be stripped from the > url. > Example of an alternate way to check the extensions: > > sub vcl_recv { > ... > set req.http.ext = regsub( req.url, "\?.+$", "" ); > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); > if( req.http.ext ~ > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { > return(lookup); > } > ... > } > > Doubtless others will say this approach is wrong for some reason or another. > I use it in a production > environment and it works fine though. Pass it along to your hosting provider > and request that they > consider changing their config. > > Note that the above code will cause the end user to receive a 'ext' header > with the file extension. > You can add a 'remove req.http.ext' after the code if you don't want that to > happen... > > Another thing to consider is that whether it this is a bug or not; it is a > common problem with varnish > configurations, and as such can be used on most varnish servers to force > them to return things > differently then they normally would. 
IE: if some backend script is a huge > request and eats up resources, sending > it a '?.jpg' could be used to hit it repeatedly and bring about a denial of > service. > > On 3/16/2011 11:55 AM, Chris Bloom wrote: > > Thank you, Bjorn, for your response. > Our hosting provider tells me that the following routines have been added to > the default config. > sub vcl_recv { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.cookie; > return (lookup); > } > } > sub vcl_fetch { > # Cache things with these extensions > if (req.url ~ > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { > unset req.http.set-cookie; > set obj.ttl = 1h; > } > } > Clearly the req.url variable contains the entire request URL, including the > querystring. Is there another variable that I should be using instead that > would only include the script name? If this is the default behavior, I'm > inclined to cry "bug". > You can test that other script for yourself by substituting > maxisavergroup.com for the domain in the example URLs I provided. > PS: We are using Varnish 2.0.6 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From weipeng.pengw at alibaba-inc.com Thu Mar 17 04:01:01 2011 From: weipeng.pengw at alibaba-inc.com (=?GB2312?B?xe3OsA==?=) Date: Thu, 17 Mar 2011 11:01:01 +0800 Subject: ESI problem in Red Hat Enterprise Linux Message-ID: hi all: i install varnish using the source code "varnish-2.1.4.tar.gz" in ubuntu10.4 and "Red Hat Enterprise Linux Server release 5.3 (Tikanga)" when i use ESI in ubuntu, it's ok, both the main page and the esi included page can be showed but the same configure file and the same pages in redhat, only the main page can be showed the configure file is as below: backend default { .host = "127.0.0.1"; .port = "80"; } backend javaeye { .host = "www.javaeye.com"; .port = "80"; .connect_timeout = 1s; .first_byte_timeout = 5s; .between_bytes_timeout = 2s; } acl purge { "localhost"; "127.0.0.1"; "192.168.1.0"/24; } sub vcl_recv { if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/forums/") { set req.backend = javaeye; set req.http.Host="www.javaeye.com"; return (pass); } else { set req.backend = default; } if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (req.request != 
"GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization || req.http.Cookie) { return (pass); } return (lookup); } sub vcl_pass { return (pass); } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } if (!obj.cacheable) { return (pass); } return (deliver); } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache."; } return (fetch); } sub vcl_fetch { if (req.url ~ "/[a-z0-9]+.html$") { esi; /* Do ESI processing */ } remove beresp.http.Last-Modified; remove beresp.http.Etag; #set beresp.http.Cache-Control="no-cache"; if (!beresp.cacheable) { return (pass); } if (beresp.http.Set-Cookie) { return (pass); } if (req.url ~ "^/[a-z]+/") { /* We only deal with GET and HEAD by default */ return (pass); } return (deliver); } sub vcl_deliver { return (deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" "} obj.status " " obj.response {"

Error "} obj.status " " obj.response {"

"} obj.response {"

Guru Meditation:

XID: "} req.xid {"


Varnish cache server

"}; return (deliver); } the main page url: http://10.20.156.7:8000/haha.html the main page content: 123haha111 please help me! thanks ! Regards! pwlazy ________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you. ???(??????)?????????????????????????????????????????????????????????????????????? -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisbloom7 at gmail.com Thu Mar 17 16:59:59 2011 From: chrisbloom7 at gmail.com (Chris Bloom) Date: Thu, 17 Mar 2011 11:59:59 -0400 Subject: Session issues when using Varnish In-Reply-To: References: <4D8106B7.5030604@sbgnet.com> Message-ID: FYI - I forwarded Ken's suggested solution to our Rackspace tech who updated our config. This appears to have resolved our issue. Thanks! Chris Bloom Internet Application Developer On Wed, Mar 16, 2011 at 6:58 PM, Ken Brownfield wrote: > Or not set a header at all: > > if ( req.url ~ > "^[^\?]*?\.(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)($|\?)" ) { > unset req.http.cookie > return(lookup); > } > > Didn't test the regex with Varnish's regex handling. > -- > kb > > > > On Wed, Mar 16, 2011 at 12:30, AD wrote: > >> You can remove the header so it doesnt get set >> >> set req.http.ext = regsub( req.url, "\?.+$", "" ); >> set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" ); >> if( req.http.ext ~ >> "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >> * remove req.http.ext; * >> return(lookup); >> } >> >> >> >> On Wed, Mar 16, 2011 at 2:59 PM, Per Buer wrote: >> >>> Hi David, List. >>> >>> I think I'll use this snipplet in the documentation if you don't mind. >>> I need to work in more regsub calls there anyway. >>> >>> Cheers, >>> >>> Per. >>> >>> On Wed, Mar 16, 2011 at 7:51 PM, David Helkowski >>> wrote: >>> > The vcl you are showing may be standard, but as you have noticed it >>> will not >>> > work properly when >>> > query strings end in a file extension. I encountered this same problem >>> after >>> > blindly copying from >>> > example varnish configurations. >>> > Before the check is done, the query parameter needs to be stripped from >>> the >>> > url. >>> > Example of an alternate way to check the extensions: >>> > >>> > sub vcl_recv { >>> > ... >>> > set req.http.ext = regsub( req.url, "\?.+$", "" ); >>> > set req.http.ext = regsub( req.http.ext, ".+\.([a-zA-Z]+)$", "\1" >>> ); >>> > if( req.http.ext ~ >>> > "^(js|gif|jpg|jpeg|png|ico|css|html|ehtml|shtml|swf)$" ) { >>> > return(lookup); >>> > } >>> > ... >>> > } >>> > >>> > Doubtless others will say this approach is wrong for some reason or >>> another. >>> > I use it in a production >>> > environment and it works fine though. Pass it along to your hosting >>> provider >>> > and request that they >>> > consider changing their config. >>> > >>> > Note that the above code will cause the end user to receive a 'ext' >>> header >>> > with the file extension. >>> > You can add a 'remove req.http.ext' after the code if you don't want >>> that to >>> > happen... >>> > >>> > Another thing to consider is that whether it this is a bug or not; it >>> is a >>> > common problem with varnish >>> > configurations, and as such can be used on most varnish servers to >>> force >>> > them to return things >>> > differently then they normally would. 
IE: if some backend script is a >>> huge >>> > request and eats up resources, sending >>> > it a '?.jpg' could be used to hit it repeatedly and bring about a >>> denial of >>> > service. >>> > >>> > On 3/16/2011 11:55 AM, Chris Bloom wrote: >>> > >>> > Thank you, Bjorn, for your response. >>> > Our hosting provider tells me that the following routines have been >>> added to >>> > the default config. >>> > sub vcl_recv { >>> > # Cache things with these extensions >>> > if (req.url ~ >>> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >>> > unset req.http.cookie; >>> > return (lookup); >>> > } >>> > } >>> > sub vcl_fetch { >>> > # Cache things with these extensions >>> > if (req.url ~ >>> > "\.(js|css|JPG|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf)$") { >>> > unset req.http.set-cookie; >>> > set obj.ttl = 1h; >>> > } >>> > } >>> > Clearly the req.url variable contains the entire request URL, including >>> the >>> > querystring. Is there another variable that I should be using instead >>> that >>> > would only include the script name? If this is the default behavior, >>> I'm >>> > inclined to cry "bug". >>> > You can test that other script for yourself by substituting >>> > maxisavergroup.com for the domain in the example URLs I provided. >>> > PS: We are using Varnish 2.0.6 >>> > >>> > _______________________________________________ >>> > varnish-misc mailing list >>> > varnish-misc at varnish-cache.org >>> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> > >>> > _______________________________________________ >>> > varnish-misc mailing list >>> > varnish-misc at varnish-cache.org >>> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> > >>> >>> >>> >>> -- >>> Per Buer, Varnish Software >>> Phone: <%2B47%2021%2098%2092%2061>+47 21 98 92 61 / Mobile: >>> <%2B47%20958%2039%20117>+47 958 39 117 / Skype: per.buer >>> Varnish makes websites fly! >>> Want to learn more about Varnish? >>> http://www.varnish-software.com/whitepapers >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.begumisa at gmail.com Fri Mar 18 02:24:19 2011 From: j.begumisa at gmail.com (Joseph Begumisa) Date: Thu, 17 Mar 2011 18:24:19 -0700 Subject: Request body of POST Message-ID: Is there anyway I can see the request body of a POST in the varnish logs generated from running the varnishlog command? Thanks. Best Regards, Joseph -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tfheen at varnish-software.com Fri Mar 18 09:22:46 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 18 Mar 2011 09:22:46 +0100 Subject: Request body of POST In-Reply-To: (Joseph Begumisa's message of "Thu, 17 Mar 2011 18:24:19 -0700") References: Message-ID: <87y64ckfc9.fsf@qurzaw.varnish-software.com> ]] Joseph Begumisa Hi, | Is there anyway I can see the request body of a POST in the varnish logs | generated from running the varnishlog command? Thanks. No. Use tcpdump or wireshark/tshark to get at that. regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From j.begumisa at gmail.com Fri Mar 18 18:16:54 2011 From: j.begumisa at gmail.com (Joseph Begumisa) Date: Fri, 18 Mar 2011 10:16:54 -0700 Subject: Request body of POST In-Reply-To: <87y64ckfc9.fsf@qurzaw.varnish-software.com> References: <87y64ckfc9.fsf@qurzaw.varnish-software.com> Message-ID: Thanks. Best Regards, Joseph On Fri, Mar 18, 2011 at 1:22 AM, Tollef Fog Heen < tfheen at varnish-software.com> wrote: > ]] Joseph Begumisa > > Hi, > > | Is there anyway I can see the request body of a POST in the varnish logs > | generated from running the varnishlog command? Thanks. > > No. > > Use tcpdump or wireshark/tshark to get at that. > > regards, > -- > Tollef Fog Heen > Varnish Software > t: +47 21 98 92 64 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun Mar 20 22:12:32 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:12:32 +0000 Subject: Are varnish subroutines reentrant? Message-ID: Would I be correct in assuming that any subroutines not using inline C are reentrant? I'm talking about non-defaulted, site-specific subroutines here, not vcl_* ones, as I presume the question is possibly meaningless for the vcl_* set. Many thanks, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From phk at phk.freebsd.dk Sun Mar 20 22:26:58 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 20 Mar 2011 21:26:58 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: Your message of "Sun, 20 Mar 2011 21:12:32 GMT." Message-ID: <98274.1300656418@critter.freebsd.dk> In message , Jona than Matthews writes: >Would I be correct in assuming that any subroutines not using inline C >are reentrant? >I'm talking about non-defaulted, site-specific subroutines here, not >vcl_* ones, as I presume the question is possibly meaningless for the >vcl_* set. It would probably be a lot more easy to answer, if you told me the names of the subroutines you are interested in. In general, reentrancy is highly variable in Varnish. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From contact at jpluscplusm.com Sun Mar 20 22:39:22 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:39:22 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: <98274.1300656418@critter.freebsd.dk> References: <98274.1300656418@critter.freebsd.dk> Message-ID: On 20 March 2011 21:26, Poul-Henning Kamp wrote: > In message , Jona > than Matthews writes: >>Would I be correct in assuming that any subroutines not using inline C >>are reentrant? >>I'm talking about non-defaulted, site-specific subroutines here, not >>vcl_* ones, as I presume the question is possibly meaningless for the >>vcl_* set. 
> > It would probably be a lot more easy to answer, if you told me the > names of the subroutines you are interested in. They're ones that I'm defining in my VCL. They're site-specific helper functions that don't exist in the default VCL. I'm not asking for an analysis of the reentrant nature of a specific algorithm or block of code, just to know if there's anything underlying the VCL at any specific points in the route through the standard subroutines that would make being reentrant more complex to deal with than solely making sure the algorithm is reentrant. If that makes sense :-) Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From phk at phk.freebsd.dk Sun Mar 20 22:43:49 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 20 Mar 2011 21:43:49 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: Your message of "Sun, 20 Mar 2011 21:39:22 GMT." Message-ID: <37820.1300657429@critter.freebsd.dk> In message , Jona than Matthews writes: >just to know if there's anything >underlying the VCL at any specific points in the route through the >standard subroutines that would make being reentrant more complex to >deal with than solely making sure the algorithm is reentrant. If that >makes sense :-) As long as you take care of the usual stuff, (static/global variables etc) there shouldn't be any issues. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From contact at jpluscplusm.com Sun Mar 20 22:58:14 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 20 Mar 2011 21:58:14 +0000 Subject: Are varnish subroutines reentrant? In-Reply-To: <37820.1300657429@critter.freebsd.dk> References: <37820.1300657429@critter.freebsd.dk> Message-ID: On 20 March 2011 21:43, Poul-Henning Kamp wrote: > In message , Jona > than Matthews writes: > >>just to know if there's anything >>underlying the VCL at any specific points in the route through the >>standard subroutines that would make being reentrant more complex to >>deal with than solely making sure the algorithm is reentrant. ?If that >>makes sense :-) > > As long as you take care of the usual stuff, (static/global variables > etc) there shouldn't be any issues. Many thanks. Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From krjeschke at omniti.com Fri Mar 18 20:18:10 2011 From: krjeschke at omniti.com (Katherine Jeschke) Date: Fri, 18 Mar 2011 15:18:10 -0400 Subject: Surge 2011 Conference CFP Message-ID: We are excited to announce Surge 2011, the Scalability and Performance Conference, to be held in Baltimore on Sept 28-30, 2011. The event focuses on case studies that demonstrate successes (and failures) in Web applications and Internet architectures. This year, we're adding Hack Day on September 28th. The inaugural, 2010 conference (http://omniti.com/surge/2010) was a smashing success and we are currently accepting submissions for papers through April 3rd. You can find more information about topics online: http://omniti.com/surge/2011 2010 attendees compared Surge to the early days of Velocity, and our speakers received 3.5-4 out of 4 stars for quality of presentation and quality of content! Nearly 90% of first-year attendees are planning to come again in 2011. For more information about the CFP or sponsorship of the event, please contact us at surge (AT) omniti (DOT) com. 
-- Katherine Jeschke Marketing Director OmniTI Computer Consulting, Inc. 7070 Samuel Morse Drive, Ste.150 Columbia, MD 21046 O: 410/872-4910, 222 C: 443/643-6140 omniti.com circonus.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Mar 21 16:08:45 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 21 Mar 2011 15:08:45 +0000 Subject: Warming the cache from an existing squid proxy instance Message-ID: Hi all - I've got some long-running squid instances, mainly used for caching medium-sized binaries, which I'd like to replace with some varnish instances. The binaries are quite heavy to regenerate on the distant origin servers and there's a large number of them. Hence, I'd like to use the squid cache as a target to warm a (new, nearby) varnish instance instead of just pointing the varnish instance at the remote origin servers. The squid instances are running in proxy mode, and require (I *believe*) an HTTP CONNECT. I've looked around for people trying the same thing, but haven't come across any success stories. I'm perfectly prepared to be told that I simply have to reconfigure the squid instances in mixed proxy/origin-server mode, and that there's no way around it, but I thought I'd ask the list for guidance first ... Any thoughts? Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From scott at dor.ky Mon Mar 21 22:10:09 2011 From: scott at dor.ky (Scott Wilcox) Date: Mon, 21 Mar 2011 21:10:09 +0000 Subject: Using Varnish with SSL Message-ID: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Hello folks, I've recently been looking at introducing Varnish into my current fronted system. From what I've seen and in my own testing, I've been very impressed with the performance gains. One question I do have, is about using SSL with Varnish. I'll be using Varnish to push over to an Apache server which runs on :80 and :443 at present, serving also identical content (if needed for simplicity, these can be merged). What I'd like to know is the best way to configure this (and if its possible actually). I very much need to keep SSL access open, I realise that I could just run apache 'native' on :443, but I'd be a lot happier if I can push it through Varnish. Thoughts, comments and suggestions all most welcome! Scott. From perbu at varnish-software.com Mon Mar 21 22:37:42 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 21 Mar 2011 22:37:42 +0100 Subject: Using Varnish with SSL In-Reply-To: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Message-ID: Hi Scott. On Mon, Mar 21, 2011 at 10:10 PM, Scott Wilcox wrote: > > What I'd like to know is the best way to configure this (and if its possible actually). I very much need to keep SSL access open, I realise that I could just run apache 'native' on :443, but I'd be a lot happier if I can push it through Varnish. www.varnish-cache.org and www.varnish-software.com are running a hidden apache (w/PHP) behind Varnish. On port 443 there is a minimalistic nginx which does the SSL stuff and connects to Varnish. It works well. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? 
http://www.varnish-software.com/whitepapers From straightflush at gmail.com Tue Mar 22 02:49:18 2011 From: straightflush at gmail.com (AD) Date: Mon, 21 Mar 2011 21:49:18 -0400 Subject: obj.ttl not available in vcl_deliver Message-ID: Hello, Per the docs it says that all the obj.* values should be available in vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get the following error: Variable 'obj.ttl' not accessible in method 'vcl_deliver'. This is on Ubuntu, Varnish 2.1.5. Any ideas ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Tue Mar 22 03:30:24 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Mon, 21 Mar 2011 19:30:24 -0700 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: Per lots of posts on this list, obj is now baresp in newer Varnish versions. It sounds like the documentation for this change hasn't been fully propagated. -- kb On Mon, Mar 21, 2011 at 18:49, AD wrote: > Hello, > > Per the docs it says that all the obj.* values should be available in > vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get > the following error: > > Variable 'obj.ttl' not accessible in method 'vcl_deliver'. > > This is on Ubuntu, Varnish 2.1.5. Any ideas ? > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Tue Mar 22 03:39:50 2011 From: straightflush at gmail.com (AD) Date: Mon, 21 Mar 2011 22:39:50 -0400 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: hmm, it seems beresp.* is available in vcl_fetch, but not vcl_deliver. I need obj.ttl in vcl_deliver (to get the TTL as it is in the cache, not from the backend). On Mon, Mar 21, 2011 at 10:30 PM, Ken Brownfield wrote: > Per lots of posts on this list, obj is now baresp in newer Varnish > versions. It sounds like the documentation for this change hasn't been > fully propagated. > -- > kb > > > > On Mon, Mar 21, 2011 at 18:49, AD wrote: > >> Hello, >> >> Per the docs it says that all the obj.* values should be available in >> vcl_hit and vcl_deliver, but when trying to use obj.ttl in vcl_deliver i get >> the following error: >> >> Variable 'obj.ttl' not accessible in method 'vcl_deliver'. >> >> This is on Ubuntu, Varnish 2.1.5. Any ideas ? >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattias at nucleus.be Tue Mar 22 10:01:45 2011 From: mattias at nucleus.be (Mattias Geniar) Date: Tue, 22 Mar 2011 10:01:45 +0100 Subject: Using Varnish with SSL In-Reply-To: References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> Message-ID: <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> Hi Per, > > What I'd like to know is the best way to configure this (and if its possible > actually). 
I very much need to keep SSL access open, I realise that I could just > run apache 'native' on :443, but I'd be a lot happier if I can push it through > Varnish. > > www.varnish-cache.org and www.varnish-software.com are running a > hidden apache (w/PHP) behind Varnish. On port 443 there is a > minimalistic nginx which does the SSL stuff and connects to Varnish. > It works well. So you're routing all SSL (port 443) via Nginx- > to Varnish -> to Apache? Meaning your nginx is covering the SSL certificates, and your backend is only getting "normal" unencrypted hits? How does that translate to performance? Are you losing a lot by passing it all via nginx first? It's an interesting discussion, I'd love to hear more on the "best practice" implementation of this to get the most performance gain. Regards, Mattias From perbu at varnish-software.com Tue Mar 22 10:25:33 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 22 Mar 2011 10:25:33 +0100 Subject: Using Varnish with SSL In-Reply-To: <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> References: <16378F11-F76A-435F-BA3A-3CB29BE21356@dor.ky> <18834F5BEC10824891FB8B22AC821A5A01556351@nucleus-srv01.Nucleus.local> Message-ID: On Tue, Mar 22, 2011 at 10:01 AM, Mattias Geniar wrote: >> www.varnish-cache.org and www.varnish-software.com are running a >> hidden apache (w/PHP) behind Varnish. On port 443 there is a >> minimalistic nginx which does the SSL stuff and connects to Varnish. >> It works well. > > So you're routing all SSL (port 443) via Nginx- > to Varnish -> to > Apache? Yes. Varnish on port 80 with a Apache backend at some other port on loopback. > Meaning your nginx is covering the SSL certificates, and your > backend is only getting "normal" unencrypted hits? Yes. > How does that translate to performance? Are you losing a lot by passing > it all via nginx first? Not really. There is some HTTP header processing that is unnecessary and that could have been saved if SSL was native in Varnish but all in all, with Varnish you usually have a lot of CPU to spare. I remember a couple of years back we where running the same stack and thousands of hits per second without any issues. > It's an interesting discussion, I'd love to hear more on the "best > practice" implementation of this to get the most performance gain. SSL used to be very expensive. It isn't anymore. There have been good advances in both hardware and software so SSL rather cheap. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From s.welschhoff at lvm.de Tue Mar 22 10:42:54 2011 From: s.welschhoff at lvm.de (Stefan Welschhoff) Date: Tue, 22 Mar 2011 10:42:54 +0100 Subject: Two Different Backends Message-ID: Hello, I want to configure varnish with two different backends. But with my configuration varnish can't handle with both. sub vcl_recv { if (req.url ~"^/partner/") { set req.backend = directory1; set req.http.host = "partnerservicesq00.xxx.de"; } if (req.url ~"^/schaden/") { set req.backend = directory2; set req.http.host = "servicesq00.xxx.de"; } else { set req.backend = default; } } When I take only the first server and comment the second out it works. But I want to have both. Kind regards Stefan Welschhoff -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: LVM_Unternehmenssignatur.pdf Type: application/pdf Size: 20769 bytes Desc: not available URL: From varnish at mm.quex.org Tue Mar 22 10:54:17 2011 From: varnish at mm.quex.org (Michael Alger) Date: Tue, 22 Mar 2011 17:54:17 +0800 Subject: Two Different Backends In-Reply-To: References: Message-ID: <20110322095417.GA26096@grum.quex.org> On Tue, Mar 22, 2011 at 10:42:54AM +0100, Stefan Welschhoff wrote: > > I want to configure varnish with two different backends. But with > my configuration varnish can't handle with both. There is a logic error here: > if (req.url ~"^/partner/") > { > set req.backend = directory1; > set req.http.host = "partnerservicesq00.xxx.de"; > } The above if-clause will be run, and then, regardless of the outcome, the next if-else-clause will be run: > if (req.url ~"^/schaden/") > { > set req.backend = directory2; > set req.http.host = "servicesq00.xxx.de"; > } > else > { > set req.backend = default; > } This means that if the URL matched /partner/ the backend will get set to back to default, because it falls through to the "else". I think you want your second if for /schaden/ to be an elsif. if (req.url ~ "^/partner/") { } elsif (req.url ~ "^/schaden/") { } else { } If that's not the problem you're having, please provide some more information, i.e. backend configuration and error messages if any, or the expected and actual result. From cdgraff at gmail.com Wed Mar 23 04:04:51 2011 From: cdgraff at gmail.com (Alejandro) Date: Wed, 23 Mar 2011 00:04:51 -0300 Subject: VarnishLog: Broken pipe (Debug) In-Reply-To: References: Message-ID: Hi guys, Some one can help with this? I have the same issue in the logs. Regards, Alejandro El 15 de marzo de 2011 10:51, Roberto O. Fern?ndez Crisial < roberto.fernandezcrisial at gmail.com> escribi?: > Hi guys, > > I need some help and I think you can help me. A few days ago I was realized > that Varnish is showing some error messages when debug mode is enable on > varnishlog: > > 4741 Debug c "Write error, retval = -1, len = 237299, errno = > Broken pipe" > 2959 Debug c "Write error, retval = -1, len = 237299, errno = > Broken pipe" > 2591 Debug c "Write error, retval = -1, len = 168289, errno = > Broken pipe" > 3517 Debug c "Write error, retval = -1, len = 114421, errno = > Broken pipe" > > I want to know what are those error messages and why are they generated. > Any suggestion? > > Thank you! > Roberto O. Fern?ndez Crisial > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Wed Mar 23 04:44:18 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:44:18 -0400 Subject: Child Process Killed Message-ID: The child process got killed abruptly. I am attaching a bunch of munin graphs, relevant syslog, the current varnishstat -1 output. I am running Varnish 2.1.5 on a 64 bit machine with the following command: sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 My VCL is fairly simple, and I think has nothing to do with the error. Any help would be appreciated. -T -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish.munin.tz Type: application/octet-stream Size: 88817 bytes Desc: not available URL: From nadahalli at gmail.com Wed Mar 23 04:46:05 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:46:05 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: Resending the other attachments (syslog and varnishstat) -T On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: > The child process got killed abruptly. > > I am attaching a bunch of munin graphs, relevant syslog, the current > varnishstat -1 output. > > I am running Varnish 2.1.5 on a 64 bit machine with the following command: > > sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a > 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p > thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p > session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 > > My VCL is fairly simple, and I think has nothing to do with the error. > > Any help would be appreciated. > > -T > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- client_conn 5409469 482.69 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 5409469 482.69 Client requests received cache_hit 5358032 478.10 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 51434 4.59 Cache misses backend_conn 51434 4.59 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 51433 4.59 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 200 . N struct sess_mem n_sess 100 . N struct sess n_object 45560 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 45669 . N struct objectcore n_objecthead 45673 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 200 . N worker threads n_wrk_create 200 0.02 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 28 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . N backends n_expired 5763 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 298470 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5409362 482.68 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 5409469 482.69 Total Sessions s_req 5409469 482.69 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 51433 4.59 Total fetch s_hdrbytes 1189049759 106098.85 Total header bytes s_bodybytes 5149727833 459509.93 Total body bytes sess_closed 5409469 482.69 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 226158115 20180.08 SHM records shm_writes 21752857 1941.01 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 27172 2.42 SHM MTX contention shm_cycles 97 0.01 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 102756 9.17 SMA allocator requests sma_nobj 91120 . SMA outstanding allocations sma_nbytes 72897093 . SMA outstanding bytes sma_balloc 82131133 . SMA bytes allocated sma_bfree 9234040 . SMA bytes free sms_nreq 1 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 418 . SMS bytes allocated sms_bfree 418 . SMS bytes freed backend_req 51434 4.59 Backend requests made n_vcl 9 0.00 N vcl total n_vcl_avail 9 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 155 . N total active purges n_purge_add 155 0.01 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 43087 3.84 N objects tested n_purge_re_test 561069 50.06 N regexps tested against n_purge_dups 140 0.01 N duplicate purges removed hcb_nolock 5409434 482.68 HCB Lookups without lock hcb_lock 45671 4.08 HCB Lookups with lock hcb_insert 45671 4.08 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 11207 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) -------------- next part -------------- Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858414] python invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858420] python cpuset=/ mems_allowed=0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858424] Pid: 5766, comm: python Not tainted 2.6.32-305-ec2 #9-Ubuntu Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858426] Call Trace: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858436] [] ? 
cpuset_print_task_mems_allowed+0x8c/0xc0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858442] [] oom_kill_process+0xe3/0x210 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858445] [] __out_of_memory+0x50/0xb0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858448] [] out_of_memory+0x5f/0xc0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858451] [] __alloc_pages_slowpath+0x4c1/0x560 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858455] [] __alloc_pages_nodemask+0x171/0x180 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858458] [] __do_page_cache_readahead+0xd7/0x220 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858461] [] ra_submit+0x1c/0x20 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858464] [] filemap_fault+0x3fe/0x450 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858468] [] __do_fault+0x50/0x680 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858470] [] handle_mm_fault+0x260/0x4f0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858476] [] do_page_fault+0x147/0x390 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858479] [] page_fault+0x28/0x30 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858481] Mem-Info: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858483] DMA per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858484] CPU 0: hi: 0, btch: 1 usd: 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858486] CPU 1: hi: 0, btch: 1 usd: 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858487] DMA32 per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858489] CPU 0: hi: 155, btch: 38 usd: 146 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858491] CPU 1: hi: 155, btch: 38 usd: 178 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858492] Normal per-cpu: Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858493] CPU 0: hi: 155, btch: 38 usd: 136 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858495] CPU 1: hi: 155, btch: 38 usd: 43 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858499] active_anon:1561108 inactive_anon:312311 isolated_anon:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858500] active_file:133 inactive_file:251 isolated_file:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858501] unevictable:0 dirty:9 writeback:0 unstable:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858501] free:10533 slab_reclaimable:711 slab_unreclaimable:7610 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858503] mapped:104 shmem:46 pagetables:0 bounce:0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858508] DMA free:16384kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:16160kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858513] lowmem_reserve[]: 0 4024 7559 7559 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858519] DMA32 free:19904kB min:5916kB low:7392kB high:8872kB active_anon:3246376kB inactive_anon:649464kB active_file:0kB inactive_file:448kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4120800kB mlocked:0kB dirty:4kB writeback:0kB mapped:164kB shmem:16kB slab_reclaimable:212kB slab_unreclaimable:5428kB kernel_stack:112kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:59 all_unreclaimable? 
yes Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858524] lowmem_reserve[]: 0 0 3534 3534 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858530] Normal free:5844kB min:5196kB low:6492kB high:7792kB active_anon:2998056kB inactive_anon:599780kB active_file:532kB inactive_file:556kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3619728kB mlocked:0kB dirty:32kB writeback:0kB mapped:252kB shmem:168kB slab_reclaimable:2632kB slab_unreclaimable:25012kB kernel_stack:2272kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:672 all_unreclaimable? no Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858534] lowmem_reserve[]: 0 0 0 0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858536] DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 4*4096kB = 16384kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858543] DMA32: 2942*4kB 1*8kB 0*16kB 0*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 19904kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858549] Normal: 471*4kB 3*8kB 6*16kB 2*32kB 3*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 5844kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858555] 477 total pagecache pages Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858557] 0 pages in swap cache Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858559] Swap cache stats: add 0, delete 0, find 0/0 Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858560] Free swap = 0kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.858561] Total swap = 0kB Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882870] 1968128 pages RAM Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882873] 61087 pages reserved Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882874] 1106 pages shared Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882875] 1894560 pages non-shared Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882878] Out of memory: kill process 1491 (varnishd) score 2838972 or a child Mar 23 00:35:06 ip-10-116-105-253 kernel: [1541993.882892] Killed process 1492 (varnishd) Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (1492) died signal=9 Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child cleanup complete Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: child (21675) Started Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (21675) said Mar 23 00:35:07 ip-10-116-105-253 varnishd[1491]: Child (21675) said Child starts From nadahalli at gmail.com Wed Mar 23 04:48:35 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Tue, 22 Mar 2011 23:48:35 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: I am running my Python origin-server on the same machine. It seems like the Python interpreter caused the OOM killer to kill Varnish. If that's the case, is there anything I can do prevent this from happening? -T On Tue, Mar 22, 2011 at 11:46 PM, Tejaswi Nadahalli wrote: > Resending the other attachments (syslog and varnishstat) > > -T > > > On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: > >> The child process got killed abruptly. >> >> I am attaching a bunch of munin graphs, relevant syslog, the current >> varnishstat -1 output. 
>> >> I am running Varnish 2.1.5 on a 64 bit machine with the following command: >> >> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >> 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p >> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >> session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 >> >> My VCL is fairly simple, and I think has nothing to do with the error. >> >> Any help would be appreciated. >> >> -T >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadahalli at gmail.com Wed Mar 23 05:27:48 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Wed, 23 Mar 2011 00:27:48 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: I found a couple of other threads involving the OOM killer. http://www.varnish-cache.org/lists/pipermail/varnish-misc/2009-April/002722.html http://www.varnish-cache.org/lists/pipermail/varnish-misc/2009-June/002838.html In both these cases, they had quite a few purge requests which added purge records which never got expired and that might have caused the out of control memory growth. I have a similar situation - with purges happening every 15 minutes. Mar 22 06:31:35 ip-10-116-105-253 varnishd[1491]: CLI telnet 127.0.0.1 60642 127.0.0.1 2000 Rd purge req.url ~ ^/\\?idsite=18&url=http%3A%2F% 2Fwww.people.com%2Fpeople%2Farticle%2F These are essentially the 'same' purges that are fired every 15 minutes. Do I have to setup the ban lurker to avoid out of control memory growth? -T On Tue, Mar 22, 2011 at 11:48 PM, Tejaswi Nadahalli wrote: > I am running my Python origin-server on the same machine. It seems like the > Python interpreter caused the OOM killer to kill Varnish. If that's the > case, is there anything I can do prevent this from happening? > > -T > > > On Tue, Mar 22, 2011 at 11:46 PM, Tejaswi Nadahalli wrote: > >> Resending the other attachments (syslog and varnishstat) >> >> -T >> >> >> On Tue, Mar 22, 2011 at 11:44 PM, Tejaswi Nadahalli wrote: >> >>> The child process got killed abruptly. >>> >>> I am attaching a bunch of munin graphs, relevant syslog, the current >>> varnishstat -1 output. >>> >>> I am running Varnish 2.1.5 on a 64 bit machine with the following >>> command: >>> >>> sudo varnishd -f /etc/varnish/default.vcl -s malloc,5G -T 127.0.0.1:2000-a >>> 0.0.0.0:80 -p thread_pools=2 -p thread_pool_min=100 -p >>> thread_pool_max=5000 -p thread_pool_add_delay=2 -p cli_timeout=25 -p >>> session_linger=100 -p lru_interval=20 -p listen_depth=4096 -t 31536000 >>> >>> My VCL is fairly simple, and I think has nothing to do with the error. >>> >>> Any help would be appreciated. >>> >>> -T >>> >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Wed Mar 23 08:11:52 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Wed, 23 Mar 2011 08:11:52 +0100 Subject: Child Process Killed In-Reply-To: (Tejaswi Nadahalli's message of "Tue, 22 Mar 2011 23:48:35 -0400") References: Message-ID: <874o6uz4xz.fsf@qurzaw.varnish-software.com> ]] Tejaswi Nadahalli | I am running my Python origin-server on the same machine. It seems like the | Python interpreter caused the OOM killer to kill Varnish. If that's the | case, is there anything I can do prevent this from happening? Add more memory, don't leak memory in your python process, limit the amount of memory varnish uses, add swap or change the oom score for varnish? 
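For the oom score route, a minimal sketch (an assumption on my part that you are on a 2.6-era kernel that still exposes /proc/<pid>/oom_adj, where -17 means "never OOM-kill this process"):

  # lower the OOM-killer priority of every running varnishd process
  for pid in $(pidof varnishd); do
      echo -17 > /proc/$pid/oom_adj
  done

oom_adj is inherited across fork(), so applying it to the management process should also cover a child that gets restarted later.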
-- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From andrea.campi at zephirworks.com Wed Mar 23 16:08:06 2011 From: andrea.campi at zephirworks.com (Andrea Campi) Date: Wed, 23 Mar 2011 16:08:06 +0100 Subject: Nested ESI + gzip + Squid 2.7.STABLE9 = invalid compressed data--format violated Message-ID: Hi, I am currently working with a client to implement ESI + gzip with trunk Varnish; since phk asked for help in breaking it, here we are :) Some background: the customer is a publishing company and we are working on the website for their daily newspaper, so ease of integration with their CMS and timely expiration of ESI fragments is paramount. Because of this, I'm using the classic technique of having the page esi:include a document with very short TTL, that in turn esi:includes the real fragment (that has a long TTL), including in the URL the last-modification TTL. So we have something like: index.shtml -> /includes2010/header.esi/homepage -> /includes2010/header.shtml/homepage This works nicely when I strip the Accept-Encoding header, on both 2.1.5 and trunk. But it breaks down with gzip compression on: Safari and Chrome give up at the point where the first ESI include is, Firefox mostly just errors out; all of them sometimes provide vague errors. The best info I have is from: "curl | zip" gzip: out: invalid compressed data--format violated Unsetting bereq.http.accept-encoding on the first ESI request didn't help; unsetting it on the second request *did* help, fixing the issue for all browsers. Setting TTL=0 for /includes2010/header.shtml/homepage didn't make a difference, nor did changing vcl_recv to return(pass), so it seems it's not a matter of what is stored in the cache. [.... a couple of hours later ....] Long story short, I finally realized the problem is not with Varnish per se, but with the office proxy (Squid 2.7.STABLE9); it seems to corrupt the gzip stream just after the 00 00 FF FF sequence: -0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff ec 3d db 72 dc +0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff 00 3d db 72 dc -0024040 75 21 aa 39 01 00 00 00 ff ff d4 59 db 52 23 39 +0024040 75 21 aa 39 01 00 00 00 ff ff 00 59 db 52 23 39 and so on. However, what I wrote above is still true: if I only have one level of ESI include, or if I have two but the inner one is not originally gzip, Squid doesn't corrupt the content. I have a few gzipped files, as well as sample vcl and html files (not that these matter after all), I can send them if those would help. Andrea From ronan at iol.ie Wed Mar 23 17:25:43 2011 From: ronan at iol.ie (Ronan Mullally) Date: Wed, 23 Mar 2011 16:25:43 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <871v2fwizs.fsf@qurzaw.varnish-software.com> References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. > | The current load is light (up to a couple of thousand active sessions, > | peak is around five thousand). Varnish has a fairly simple config with > | a director consisting of two Apache backends: > > This looks a bit odd: > > | backend backend1 { > | .host = "1.2.3.4"; > | .port = "80"; > | .connect_timeout = 5s; > | .first_byte_timeout = 90s; > | .between_bytes_timeout = 90s; > | A typical request is below. The first attempt fails with: > | > | 33 FetchError c http first read error: -1 0 (Success) > > This just means the backend closed the connection on us. 
> > | there is presumably a restart and the second attempt (sometimes to > | backend1, sometimes backend2) fails with: > | > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > This is a timeout, however: > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > That 2.89s backend response time doesn't add up with your timeouts. Can > you see if you can get a tcpdump of what's going on? Varnishlog and output from TCP for a typical occurance is below. If you need any further details let me know. -Ronan 16 ReqStart c 202.168.71.170 39173 403520520 16 RxRequest c POST 16 RxURL c /ajax.php 16 RxProtocol c HTTP/1.1 16 RxHeader c Via: 1.1 APSRVMY35001 16 RxHeader c Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 16 RxHeader c Referer: http://www.redcafe.net/f8/ 16 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 16 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 16 RxHeader c Host: www.redcafe.net 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 16 RxHeader c Accept-Language: en-gb,en;q=0.5 16 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 16 RxHeader c Keep-Alive: 115 16 RxHeader c X-Requested-With: XMLHttpRequest 16 RxHeader c Pragma: no-cache 16 RxHeader c Cache-Control: no-cache 16 RxHeader c Connection: Keep-Alive 16 RxHeader c Content-Length: 82 16 VCL_call c recv 16 VCL_return c pass 16 VCL_call c hash 16 VCL_return c hash 16 VCL_call c pass 16 VCL_return c pass 16 Backend c 53 redcafe redcafe1 53 TxRequest b POST 53 TxURL b /ajax.php 53 TxProtocol b HTTP/1.1 53 TxHeader b Via: 1.1 APSRVMY35001 53 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 53 TxHeader b Referer: http://www.redcafe.net/f8/ 53 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 53 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 53 TxHeader b Host: www.redcafe.net 53 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 53 TxHeader b Accept-Language: en-gb,en;q=0.5 53 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 53 TxHeader b X-Requested-With: XMLHttpRequest 53 TxHeader b Pragma: no-cache 53 TxHeader b Cache-Control: no-cache 53 TxHeader b Content-Length: 82 53 TxHeader b X-Forwarded-For: 202.168.71.170 53 TxHeader b X-Varnish: 403520520 16 FetchError c http first read error: -1 0 (Success) 53 BackendClose b redcafe1 16 Backend c 52 redcafe redcafe2 52 TxRequest b POST 52 TxURL b /ajax.php 52 TxProtocol b HTTP/1.1 52 TxHeader b Via: 1.1 APSRVMY35001 52 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 
52 TxHeader b Referer: http://www.redcafe.net/f8/ 52 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 52 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 52 TxHeader b Host: www.redcafe.net 52 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 52 TxHeader b Accept-Language: en-gb,en;q=0.5 52 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 52 TxHeader b X-Requested-With: XMLHttpRequest 52 TxHeader b Pragma: no-cache 52 TxHeader b Cache-Control: no-cache 52 TxHeader b Content-Length: 82 52 TxHeader b X-Forwarded-For: 202.168.71.170 52 TxHeader b X-Varnish: 403520520 16 FetchError c backend write error: 11 (Resource temporarily unavailable) 52 BackendClose b redcafe2 16 VCL_call c error 16 VCL_return c deliver 16 VCL_call c deliver 16 VCL_return c deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 503 16 TxResponse c Service Unavailable 16 TxHeader c Server: Varnish 16 TxHeader c Retry-After: 0 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Content-Length: 2623 16 TxHeader c Date: Mon, 14 Mar 2011 11:16:11 GMT 16 TxHeader c X-Varnish: 403520520 16 TxHeader c Age: 2 16 TxHeader c Via: 1.1 varnish 16 TxHeader c Connection: close 16 Length c 2623 16 ReqEnd c 403520520 1300101369.629967451 1300101371.923255682 0.000078917 2.293243885 0.000044346 16 SessionClose c error 16 StatSess c 202.168.71.170 39173 2 1 1 0 1 0 235 2623 First attempt (redcafe1 backend) 2011-03-14 11:16:09.892897 IP 193.27.1.46.22809 > 193.27.1.44.80: . ack 2433 win 91 0x0000: 4500 0034 d23d 4000 4006 e3f5 c11b 012e E..4.=@. at ....... 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q 0x0020: 8010 005b 35a1 0000 0101 080a 0c9d 985b ...[5..........[ 0x0030: 101a 178d .... 2011-03-14 11:16:09.926678 IP 193.27.1.46.22809 > 193.27.1.44.80: P 400:1549(1149) ack 2433 win 91 0x0000: 4500 04b1 d23e 4000 4006 df77 c11b 012e E....>@. at ..w.... 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q 0x0020: 8018 005b 8934 0000 0101 080a 0c9d 9863 ...[.4.........c 0x0030: 101a 178d 504f 5354 202f 616a 6178 2e70 ....POST./ajax.p 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u ... 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 
0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a73 :.403520520....s 0x0460: 6563 7572 6974 7974 6f6b 656e 3d31 3330 ecuritytoken=130 0x0470: 3030 3937 3737 302d 3539 3938 3235 3061 0097770-5998250a 0x0480: 6336 6662 3932 3431 6435 3335 3835 6366 c6fb9241d53585cf 0x0490: 3863 6537 3039 3534 6333 6531 6362 3430 8ce70954c3e1cb40 0x04a0: 2664 6f3d 7365 6375 7269 7479 746f 6b65 &do=securitytoke 0x04b0: 6e n 2011-03-14 11:16:09.926769 IP 193.27.1.46.22809 > 193.27.1.44.80: F 1549:1549(0) ack 2433 win 91 0x0000: 4500 0034 d23f 4000 4006 e3f3 c11b 012e E..4.?@. at ....... 0x0010: c11b 012c 5919 0050 41f0 4b83 4cdf c051 ...,Y..PA.K.L..Q 0x0020: 8011 005b 311b 0000 0101 080a 0c9d 9863 ...[1..........c 0x0030: 101a 178d .... 2011-03-14 11:16:09.926870 IP 193.27.1.44.80 > 193.27.1.46.22809: . ack 1550 win 36 0x0000: 4500 0034 a05f 4000 4006 15d4 c11b 012c E..4._ at .@......, 0x0010: c11b 012e 0050 5919 4cdf c051 41f0 4b84 .....PY.L..QA.K. 0x0020: 8010 0024 3140 0000 0101 080a 101a 179f ...$1 at .......... 0x0030: 0c9d 9863 ...c Second attempt (redcafe2 backend) 2011-03-14 11:16:11.923056 IP 193.27.1.46.55567 > 193.27.1.45.80: P 6711:7778(1067) ack 148116 win 757 0x0000: 4500 045f b04c 4000 4006 01bb c11b 012e E.._.L at .@....... 
0x0010: c11b 012d d90f 0050 3df9 5730 48af 6a67 ...-...P=.W0H.jg 0x0020: 8018 02f5 88e3 0000 0101 080a 0c9d 9a57 ...............W 0x0030: 1019 bc2d 504f 5354 202f 616a 6178 2e70 ...-POST./ajax.p 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u ... 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a :.403520520.... 2011-03-14 11:16:11.923115 IP 193.27.1.46.55567 > 193.27.1.45.80: F 7778:7778(0) ack 148116 win 757 0x0000: 4500 0034 b04d 4000 4006 05e5 c11b 012e E..4.M at .@....... 0x0010: c11b 012d d90f 0050 3df9 5b5b 48af 6a67 ...-...P=.[[H.jg 0x0020: 8011 02f5 562f 0000 0101 080a 0c9d 9a57 ....V/.........W 0x0030: 1019 bc2d ...- 2011-03-14 11:16:11.923442 IP 193.27.1.45.80 > 193.27.1.46.55567: . 
ack 7778 win 137 0x0000: 4500 0034 9178 4000 4006 24ba c11b 012d E..4.x at .@.$....- 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5b .....P..H.jg=.[[ 0x0020: 8010 0089 56b4 0000 0101 080a 1019 be15 ....V........... 0x0030: 0c9d 9a57 ...W 2011-03-14 11:16:11.923454 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7779 win 137 0x0000: 4500 0034 9179 4000 4006 24b9 c11b 012d E..4.y at .@.$....- 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5c .....P..H.jg=.[\ 0x0020: 8010 0089 56b3 0000 0101 080a 1019 be15 ....V........... 0x0030: 0c9d 9a57 ...W From phk at phk.freebsd.dk Wed Mar 23 19:24:26 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 23 Mar 2011 18:24:26 +0000 Subject: Nested ESI + gzip + Squid 2.7.STABLE9 = invalid compressed data--format violated In-Reply-To: Your message of "Wed, 23 Mar 2011 16:08:06 +0100." Message-ID: <13858.1300904666@critter.freebsd.dk> In message , Andr ea Campi writes: >Long story short, I finally realized the problem is not with Varnish >per se, but with the office proxy (Squid 2.7.STABLE9); it seems to >corrupt the gzip stream just after the 00 00 FF FF sequence: > >-0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff ec 3d db 72 dc >+0004340 5d 90 4a 4e 4e 00 00 00 00 ff ff 00 3d db 72 dc > >-0024040 75 21 aa 39 01 00 00 00 ff ff d4 59 db 52 23 39 >+0024040 75 21 aa 39 01 00 00 00 ff ff 00 59 db 52 23 39 > >and so on. We found a similar issue in ngnix last week: A 1 byte chunked encoding get zap'ed to 0x00 just like what you show. Are you sure there is no ngnix instance involved ? It would be weird of both squid and ngnix has the same bug ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From TFigueiro at au.westfield.com Wed Mar 23 21:33:06 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Thu, 24 Mar 2011 07:33:06 +1100 Subject: Child Process Killed In-Reply-To: References: Message-ID: From: Tejaswi Nadahalli > I am running my Python origin-server on the same machine. It seems like > the Python interpreter caused the OOM killer to kill Varnish. If that's > the case, is there anything I can do prevent this from happening? I've been meaning to write-up a blog entry regarding the OOM killer in Linux (what a dumb idea) but in the mean time this should get you started. The OOM Killer is there because Linux, by default in most distros, allocates more memory than available (swap+ram) on the assumption that applications will never need it (this is called overcommiting). Mostly this is true but when it's not the oom_kill is called to free-up some memory so the kernel can keep its promise. Usually it does a shit job (as you just noticed) and I hate it so much. One way to solve this is to tweak oom_kill so it doesn't kill varnish processes. It's a bit cumbersome because you need to do that based on the PID, which you only learn after the process has started, leaving room for some nifty race conditions. Still, adding these to Varnish's init scripts should do what you need - look up online for details. The other way is to disable memory overcommit. 
Add to /etc/sysctl.conf: # Disables memory overcommit vm.overcommit_memory = 2 # Tweak to fool VM (read manual for setting above) vm.overcommit_ratio = 100 # swap only if really needed vm.swappiness = 10 and sudo /sbin/sysctl -e -p /etc/sysctl.conf The problem with setting overcommit_memory to 2 is that the VM will not allocate more memory than you have available (the actual rule is a function of RAM, swap and overcommit_ratio, hence the tweak above). This could be a problem for Varnish depending on the storage used. The file storage will mmap the file, resulting in a VM size as large as the file. If you don't have enough RAM the kernel will deny memory allocation and varnish will fail to start. At this point you either buy more RAM or tweak your swap size to account for greedy processes (ie.: processes that allocate a lot of memory but never use it). TL;DR: buy more memory; get rid of memory hungry scripts in your varnish box Good luck. ______________________________________________________ CONFIDENTIALITY NOTICE This electronic mail message, including any and/or all attachments, is for the sole use of the intended recipient(s), and may contain confidential and/or privileged information, pertaining to business conducted under the direction and supervision of the sending organization. All electronic mail messages, which may have been established as expressed views and/or opinions (stated either within the electronic mail message or any of its attachments), are left to the sole responsibility of that of the sender, and are not necessarily attributed to the sending organization. Unauthorized interception, review, use, disclosure or distribution of any such information contained within this electronic mail message and/or its attachment(s), is (are) strictly prohibited. If you are not the intended recipient, please contact the sender by replying to this electronic mail message, along with the destruction all copies of the original electronic mail message (along with any attachments). ______________________________________________________ From nadahalli at gmail.com Wed Mar 23 22:10:20 2011 From: nadahalli at gmail.com (Tejaswi Nadahalli) Date: Wed, 23 Mar 2011 17:10:20 -0400 Subject: Child Process Killed In-Reply-To: References: Message-ID: Thanks for the detailed explanation of why the OOM Killer strikes. I have dome some reading about it, and am tinkering with how to stop it from killing varnishd. What I am curious about is - how did the OOM killer get invoked at all? My python process is fairly basic, and wouldn't have consumed any memory at all. When varnish reaches it's malloc limit, I thought cached objects would start getting Nuked. My LRU nuke counters were 0 through the process. So, instead of nuking objects gracefully, I had a varnish-child-restart. This is what I am worried about. If I can get nuking by reducing the overall memory footprint by reducing the malloc limits, I will gladly do it. Do you think that might help? -T On Wed, Mar 23, 2011 at 4:33 PM, Thiago Figueiro wrote: > From: Tejaswi Nadahalli > > I am running my Python origin-server on the same machine. It seems like > > the Python interpreter caused the OOM killer to kill Varnish. If that's > > the case, is there anything I can do prevent this from happening? > > > I've been meaning to write-up a blog entry regarding the OOM killer in > Linux (what a dumb idea) but in the mean time this should get you started. 
> > The OOM Killer is there because Linux, by default in most distros, > allocates more memory than available (swap+ram) on the assumption that > applications will never need it (this is called overcommiting). Mostly this > is true but when it's not the oom_kill is called to free-up some memory so > the kernel can keep its promise. Usually it does a shit job (as you just > noticed) and I hate it so much. > > One way to solve this is to tweak oom_kill so it doesn't kill varnish > processes. It's a bit cumbersome because you need to do that based on the > PID, which you only learn after the process has started, leaving room for > some nifty race conditions. Still, adding these to Varnish's init scripts > should do what you need - look up online for details. > > The other way is to disable memory overcommit. Add to /etc/sysctl.conf: > > # Disables memory overcommit > vm.overcommit_memory = 2 > # Tweak to fool VM (read manual for setting above) > vm.overcommit_ratio = 100 > # swap only if really needed > vm.swappiness = 10 > > and sudo /sbin/sysctl -e -p /etc/sysctl.conf > > The problem with setting overcommit_memory to 2 is that the VM will not > allocate more memory than you have available (the actual rule is a function > of RAM, swap and overcommit_ratio, hence the tweak above). > > This could be a problem for Varnish depending on the storage used. The > file storage will mmap the file, resulting in a VM size as large as the > file. If you don't have enough RAM the kernel will deny memory allocation > and varnish will fail to start. At this point you either buy more RAM or > tweak your swap size to account for greedy processes (ie.: processes that > allocate a lot of memory but never use it). > > > TL;DR: buy more memory; get rid of memory hungry scripts in your varnish > box > > > Good luck. > > > ______________________________________________________ > CONFIDENTIALITY NOTICE > This electronic mail message, including any and/or all attachments, is for > the sole use of the intended recipient(s), and may contain confidential > and/or privileged information, pertaining to business conducted under the > direction and supervision of the sending organization. All electronic mail > messages, which may have been established as expressed views and/or opinions > (stated either within the electronic mail message or any of its > attachments), are left to the sole responsibility of that of the sender, and > are not necessarily attributed to the sending organization. Unauthorized > interception, review, use, disclosure or distribution of any such > information contained within this electronic mail message and/or its > attachment(s), is (are) strictly prohibited. If you are not the intended > recipient, please contact the sender by replying to this electronic mail > message, along with the destruction all copies of the original electronic > mail message (along with any attachments). > ______________________________________________________ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From TFigueiro at au.westfield.com Thu Mar 24 03:39:24 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Thu, 24 Mar 2011 13:39:24 +1100 Subject: Child Process Killed In-Reply-To: References: Message-ID: > Do you think that might help? You're looking for /proc/PID/oom_score; here, read this: http://lwn.net/Articles/317814/ Reducing memory usage will help, yes. And what Tollef said in his reply is the practical approach: add ram and/or swap. 
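To see how close you are, something along these lines shows the kernel's current badness score and the resident sizes (the process names are only examples, adjust them to whatever your origin server actually runs as):

  # badness score the OOM killer uses to pick its victim (higher = killed first)
  for pid in $(pidof varnishd); do echo "varnishd $pid: $(cat /proc/$pid/oom_score)"; done
  # resident (RSS) and virtual (VSZ) sizes, in kB, of the likely memory hogs
  ps -o pid,rss,vsz,comm -C varnishd,python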
At some point the sum of the RESIDENT processes memory size is bigger than SWAP+RAM, and this is what triggers oom_kill. The other way around is what you suggested yourself: reduce memory usage. G'luck ______________________________________________________ CONFIDENTIALITY NOTICE This electronic mail message, including any and/or all attachments, is for the sole use of the intended recipient(s), and may contain confidential and/or privileged information, pertaining to business conducted under the direction and supervision of the sending organization. All electronic mail messages, which may have been established as expressed views and/or opinions (stated either within the electronic mail message or any of its attachments), are left to the sole responsibility of that of the sender, and are not necessarily attributed to the sending organization. Unauthorized interception, review, use, disclosure or distribution of any such information contained within this electronic mail message and/or its attachment(s), is (are) strictly prohibited. If you are not the intended recipient, please contact the sender by replying to this electronic mail message, along with the destruction all copies of the original electronic mail message (along with any attachments). ______________________________________________________ From nfn at gmx.com Fri Mar 25 10:47:33 2011 From: nfn at gmx.com (Nuno Neves) Date: Fri, 25 Mar 2011 09:47:33 +0000 Subject: Using cron to purge cache Message-ID: <20110325094733.232990@gmx.com> Hello, I have a file named varnish-purge with this content that it's executed daily by cron, but the objects remain in the cache, even when I run it manually. -------------------------------------------------------------------------------------------- #!/bin/sh /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*" -------------------------------------------------------------------------------------------- The cron file is: ----------------------------------- #!/bin/sh /usr/local/bin/varnish-purge ----------------------------------- I alread used: /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 url.purge '.*' and /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 url.purge . without succes. The only way to purge the cache is restarting varnish. I'm using vanish 2.1.5 from http://repo.varnish-cache.org http://repo.varnish-cache.org/debian/GPG-key.txt Any guidance will be apreciated. Thanks Nuno -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Fri Mar 25 10:54:47 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 25 Mar 2011 10:54:47 +0100 Subject: Using cron to purge cache In-Reply-To: <20110325094733.232990@gmx.com> References: <20110325094733.232990@gmx.com> Message-ID: Hi Nuno. On Fri, Mar 25, 2011 at 10:47 AM, Nuno Neves wrote: > Hello, > > I have a file named varnish-purge with this content that it's executed daily > by cron, but the objects remain in the cache, even when I run it manually. > -------------------------------------------------------------------------------------------- > #!/bin/sh > > /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*" url.purge will create what we call a "ban", or a filter. Think of it as a lazy purge. The objects will remain in memory but killed during lookup. If you want to kill the objects from cache you'd have to set up the ban lurker to walk the objects and expunge them. 
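For single objects there is also the HTTP PURGE route: let VCL accept a PURGE request and zero the object's TTL on hit. A rough sketch for 2.1 (the purgers ACL below is just an example, adjust it to your setup):

acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        # the cached copy is dead as soon as the TTL is zeroed
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}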
If you want the objects to actually disappear from memory right away you would have to do a HTTP PURGE call, and setting the TTL to zero, but that means you'd have to kill off every URL in cache. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From ronan at iol.ie Fri Mar 25 11:12:54 2011 From: ronan at iol.ie (Ronan Mullally) Date: Fri, 25 Mar 2011 10:12:54 +0000 (GMT) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: I am still encountering this problem - about 1% on average of POSTs are failing with a 503 when there is no problem apparent on the back-ends. GETs are not affected: Hour GETs Fails POSTs Fails 00:00 38060 0 (0.00%) 480 2 (0.42%) 01:00 34051 0 (0.00%) 412 0 (0.00%) 02:00 29881 0 (0.00%) 383 2 (0.52%) 03:00 25741 0 (0.00%) 374 1 (0.27%) 04:00 22296 0 (0.00%) 326 2 (0.61%) 05:00 22594 0 (0.00%) 349 20 (5.73%) 06:00 31422 0 (0.00%) 408 6 (1.47%) 07:00 58746 0 (0.00%) 656 6 (0.91%) 08:00 74307 0 (0.00%) 870 4 (0.46%) 09:00 87386 0 (0.00%) 1280 8 (0.62%) 10:00 51744 0 (0.00%) 741 8 (1.08%) 11:00 50060 0 (0.00%) 825 1 (0.12%) 12:00 58573 0 (0.00%) 664 5 (0.75%) 13:00 60548 0 (0.00%) 735 7 (0.95%) 14:00 60242 0 (0.00%) 875 8 (0.91%) 15:00 61427 0 (0.00%) 778 3 (0.39%) 16:00 66480 0 (0.00%) 810 4 (0.49%) 17:00 65749 0 (0.00%) 836 12 (1.44%) 18:00 64312 0 (0.00%) 732 3 (0.41%) 19:00 60930 0 (0.00%) 652 5 (0.77%) 20:00 59646 0 (0.00%) 626 1 (0.16%) 21:00 61218 0 (0.00%) 674 3 (0.45%) 22:00 55908 0 (0.00%) 598 3 (0.50%) 23:00 45173 0 (0.00%) 560 1 (0.18%) There was another poster on this thread with the same problem which suggests a possible varnish problem rather than anything specific to my setup. Does anybody have any ideas? -Ronan On Wed, 23 Mar 2011, Ronan Mullally wrote: > On Thu, 10 Mar 2011, Tollef Fog Heen wrote: > > > | I'm seeing intermittant 503s on POSTs to a fairly busy VBulletin website. > > | The current load is light (up to a couple of thousand active sessions, > > | peak is around five thousand). Varnish has a fairly simple config with > > | a director consisting of two Apache backends: > > > > This looks a bit odd: > > > > | backend backend1 { > > | .host = "1.2.3.4"; > > | .port = "80"; > > | .connect_timeout = 5s; > > | .first_byte_timeout = 90s; > > | .between_bytes_timeout = 90s; > > | A typical request is below. The first attempt fails with: > > | > > | 33 FetchError c http first read error: -1 0 (Success) > > > > This just means the backend closed the connection on us. > > > > | there is presumably a restart and the second attempt (sometimes to > > | backend1, sometimes backend2) fails with: > > | > > | 33 FetchError c backend write error: 11 (Resource temporarily unavailable) > > > > This is a timeout, however: > > > > | 33 ReqEnd c 657185708 1299604110.559967279 1299604113.447372913 0.000037670 2.887368441 0.000037193 > > > > That 2.89s backend response time doesn't add up with your timeouts. Can > > you see if you can get a tcpdump of what's going on? > > Varnishlog and output from TCP for a typical occurance is below. If you > need any further details let me know. 
> > > -Ronan > > 16 ReqStart c 202.168.71.170 39173 403520520 > 16 RxRequest c POST > 16 RxURL c /ajax.php > 16 RxProtocol c HTTP/1.1 > 16 RxHeader c Via: 1.1 APSRVMY35001 > 16 RxHeader c Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... > 16 RxHeader c Referer: http://www.redcafe.net/f8/ > 16 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 > 16 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 > 16 RxHeader c Host: www.redcafe.net > 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 16 RxHeader c Accept-Language: en-gb,en;q=0.5 > 16 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 16 RxHeader c Keep-Alive: 115 > 16 RxHeader c X-Requested-With: XMLHttpRequest > 16 RxHeader c Pragma: no-cache > 16 RxHeader c Cache-Control: no-cache > 16 RxHeader c Connection: Keep-Alive > 16 RxHeader c Content-Length: 82 > 16 VCL_call c recv > 16 VCL_return c pass > 16 VCL_call c hash > 16 VCL_return c hash > 16 VCL_call c pass > 16 VCL_return c pass > 16 Backend c 53 redcafe redcafe1 > 53 TxRequest b POST > 53 TxURL b /ajax.php > 53 TxProtocol b HTTP/1.1 > 53 TxHeader b Via: 1.1 APSRVMY35001 > 53 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... > 53 TxHeader b Referer: http://www.redcafe.net/f8/ > 53 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 > 53 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 > 53 TxHeader b Host: www.redcafe.net > 53 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 53 TxHeader b Accept-Language: en-gb,en;q=0.5 > 53 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 53 TxHeader b X-Requested-With: XMLHttpRequest > 53 TxHeader b Pragma: no-cache > 53 TxHeader b Cache-Control: no-cache > 53 TxHeader b Content-Length: 82 > 53 TxHeader b X-Forwarded-For: 202.168.71.170 > 53 TxHeader b X-Varnish: 403520520 > 16 FetchError c http first read error: -1 0 (Success) > 53 BackendClose b redcafe1 > 16 Backend c 52 redcafe redcafe2 > 52 TxRequest b POST > 52 TxURL b /ajax.php > 52 TxProtocol b HTTP/1.1 > 52 TxHeader b Via: 1.1 APSRVMY35001 > 52 TxHeader b Cookie: __utma=216440341.583483764.1291872570.1299827399.1300063501.123; __utmz=216440341.1292919894.10.2.... 
> 52 TxHeader b Referer: http://www.redcafe.net/f8/ > 52 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 > 52 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 > 52 TxHeader b Host: www.redcafe.net > 52 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 52 TxHeader b Accept-Language: en-gb,en;q=0.5 > 52 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 52 TxHeader b X-Requested-With: XMLHttpRequest > 52 TxHeader b Pragma: no-cache > 52 TxHeader b Cache-Control: no-cache > 52 TxHeader b Content-Length: 82 > 52 TxHeader b X-Forwarded-For: 202.168.71.170 > 52 TxHeader b X-Varnish: 403520520 > 16 FetchError c backend write error: 11 (Resource temporarily unavailable) > 52 BackendClose b redcafe2 > 16 VCL_call c error > 16 VCL_return c deliver > 16 VCL_call c deliver > 16 VCL_return c deliver > 16 TxProtocol c HTTP/1.1 > 16 TxStatus c 503 > 16 TxResponse c Service Unavailable > 16 TxHeader c Server: Varnish > 16 TxHeader c Retry-After: 0 > 16 TxHeader c Content-Type: text/html; charset=utf-8 > 16 TxHeader c Content-Length: 2623 > 16 TxHeader c Date: Mon, 14 Mar 2011 11:16:11 GMT > 16 TxHeader c X-Varnish: 403520520 > 16 TxHeader c Age: 2 > 16 TxHeader c Via: 1.1 varnish > 16 TxHeader c Connection: close > 16 Length c 2623 > 16 ReqEnd c 403520520 1300101369.629967451 1300101371.923255682 0.000078917 2.293243885 0.000044346 > 16 SessionClose c error > 16 StatSess c 202.168.71.170 39173 2 1 1 0 1 0 235 2623 > > First attempt (redcafe1 backend) > > 2011-03-14 11:16:09.892897 IP 193.27.1.46.22809 > 193.27.1.44.80: . ack 2433 win 91 > 0x0000: 4500 0034 d23d 4000 4006 e3f5 c11b 012e E..4.=@. at ....... > 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q > 0x0020: 8010 005b 35a1 0000 0101 080a 0c9d 985b ...[5..........[ > 0x0030: 101a 178d .... > 2011-03-14 11:16:09.926678 IP 193.27.1.46.22809 > 193.27.1.44.80: P 400:1549(1149) ack 2433 win 91 > 0x0000: 4500 04b1 d23e 4000 4006 df77 c11b 012e E....>@. at ..w.... > 0x0010: c11b 012c 5919 0050 41f0 4706 4cdf c051 ...,Y..PA.G.L..Q > 0x0020: 8018 005b 8934 0000 0101 080a 0c9d 9863 ...[.4.........c > 0x0030: 101a 178d 504f 5354 202f 616a 6178 2e70 ....POST./ajax.p > 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via > 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 > 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut > 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 > 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 > 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 > 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u > 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 > 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut > 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u > ... > 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 > 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref > 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www > 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ > 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. 
> 0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww > 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode > 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 > 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo > 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo > 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N > 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv > 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 > 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 > 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re > 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep > 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app > 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x > 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x > 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 > 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu > 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= > 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char > 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, > 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= > 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested > 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe > 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n > 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C > 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache > 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length > 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde > 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 > 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish > 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a73 :.403520520....s > 0x0460: 6563 7572 6974 7974 6f6b 656e 3d31 3330 ecuritytoken=130 > 0x0470: 3030 3937 3737 302d 3539 3938 3235 3061 0097770-5998250a > 0x0480: 6336 6662 3932 3431 6435 3335 3835 6366 c6fb9241d53585cf > 0x0490: 3863 6537 3039 3534 6333 6531 6362 3430 8ce70954c3e1cb40 > 0x04a0: 2664 6f3d 7365 6375 7269 7479 746f 6b65 &do=securitytoke > 0x04b0: 6e n > 2011-03-14 11:16:09.926769 IP 193.27.1.46.22809 > 193.27.1.44.80: F 1549:1549(0) ack 2433 win 91 > 0x0000: 4500 0034 d23f 4000 4006 e3f3 c11b 012e E..4.?@. at ....... > 0x0010: c11b 012c 5919 0050 41f0 4b83 4cdf c051 ...,Y..PA.K.L..Q > 0x0020: 8011 005b 311b 0000 0101 080a 0c9d 9863 ...[1..........c > 0x0030: 101a 178d .... > 2011-03-14 11:16:09.926870 IP 193.27.1.44.80 > 193.27.1.46.22809: . ack 1550 win 36 > 0x0000: 4500 0034 a05f 4000 4006 15d4 c11b 012c E..4._ at .@......, > 0x0010: c11b 012e 0050 5919 4cdf c051 41f0 4b84 .....PY.L..QA.K. > 0x0020: 8010 0024 3140 0000 0101 080a 101a 179f ...$1 at .......... > 0x0030: 0c9d 9863 ...c > > > Second attempt (redcafe2 backend) > > 2011-03-14 11:16:11.923056 IP 193.27.1.46.55567 > 193.27.1.45.80: P 6711:7778(1067) ack 148116 win 757 > 0x0000: 4500 045f b04c 4000 4006 01bb c11b 012e E.._.L at .@....... 
> 0x0010: c11b 012d d90f 0050 3df9 5730 48af 6a67 ...-...P=.W0H.jg > 0x0020: 8018 02f5 88e3 0000 0101 080a 0c9d 9a57 ...............W > 0x0030: 1019 bc2d 504f 5354 202f 616a 6178 2e70 ...-POST./ajax.p > 0x0040: 6870 2048 5454 502f 312e 310d 0a56 6961 hp.HTTP/1.1..Via > 0x0050: 3a20 312e 3120 4150 5352 564d 5933 3530 :.1.1.APSRVMY350 > 0x0060: 3031 0d0a 436f 6f6b 6965 3a20 5f5f 7574 01..Cookie:.__ut > 0x0070: 6d61 3d32 3136 3434 3033 3431 2e35 3833 ma=216440341.583 > 0x0080: 3438 3337 3634 2e31 3239 3138 3732 3537 483764.129187257 > 0x0090: 302e 3132 3939 3832 3733 3939 2e31 3330 0.1299827399.130 > 0x00a0: 3030 3633 3530 312e 3132 333b 205f 5f75 0063501.123;.__u > 0x00b0: 746d 7a3d 3231 3634 3430 3334 312e 3132 tmz=216440341.12 > 0x00c0: 3932 3931 3938 3934 2e31 302e 322e 7574 92919894.10.2.ut > 0x00d0: 6d63 636e 3d28 6f72 6761 6e69 6329 7c75 mccn=(organic)|u > ... > 0x0230: 3742 692d 3332 3335 3438 5f69 2d31 3330 7Bi-323548_i-130 > 0x0240: 3030 3632 3533 375f 2537 440d 0a52 6566 0062537_%7D..Ref > 0x0250: 6572 6572 3a20 6874 7470 3a2f 2f77 7777 erer:.http://www > 0x0260: 2e72 6564 6361 6665 2e6e 6574 2f66 382f .redcafe.net/f8/ > 0x0270: 0d0a 436f 6e74 656e 742d 5479 7065 3a20 ..Content-Type:. > 0x0280: 6170 706c 6963 6174 696f 6e2f 782d 7777 application/x-ww > 0x0290: 772d 666f 726d 2d75 726c 656e 636f 6465 w-form-urlencode > 0x02a0: 643b 2063 6861 7273 6574 3d55 5446 2d38 d;.charset=UTF-8 > 0x02b0: 0d0a 5573 6572 2d41 6765 6e74 3a20 4d6f ..User-Agent:.Mo > 0x02c0: 7a69 6c6c 612f 352e 3020 2857 696e 646f zilla/5.0.(Windo > 0x02d0: 7773 3b20 553b 2057 696e 646f 7773 204e ws;.U;.Windows.N > 0x02e0: 5420 362e 313b 2065 6e2d 4742 3b20 7276 T.6.1;.en-GB;.rv > 0x02f0: 3a31 2e39 2e32 2920 4765 636b 6f2f 3230 :1.9.2).Gecko/20 > 0x0300: 3130 3031 3135 2046 6972 6566 6f78 2f33 100115.Firefox/3 > 0x0310: 2e36 0d0a 486f 7374 3a20 7777 772e 7265 .6..Host:.www.re > 0x0320: 6463 6166 652e 6e65 740d 0a41 6363 6570 dcafe.net..Accep > 0x0330: 743a 2074 6578 742f 6874 6d6c 2c61 7070 t:.text/html,app > 0x0340: 6c69 6361 7469 6f6e 2f78 6874 6d6c 2b78 lication/xhtml+x > 0x0350: 6d6c 2c61 7070 6c69 6361 7469 6f6e 2f78 ml,application/x > 0x0360: 6d6c 3b71 3d30 2e39 2c2a 2f2a 3b71 3d30 ml;q=0.9,*/*;q=0 > 0x0370: 2e38 0d0a 4163 6365 7074 2d4c 616e 6775 .8..Accept-Langu > 0x0380: 6167 653a 2065 6e2d 6762 2c65 6e3b 713d age:.en-gb,en;q= > 0x0390: 302e 350d 0a41 6363 6570 742d 4368 6172 0.5..Accept-Char > 0x03a0: 7365 743a 2049 534f 2d38 3835 392d 312c set:.ISO-8859-1, > 0x03b0: 7574 662d 383b 713d 302e 372c 2a3b 713d utf-8;q=0.7,*;q= > 0x03c0: 302e 370d 0a58 2d52 6571 7565 7374 6564 0.7..X-Requested > 0x03d0: 2d57 6974 683a 2058 4d4c 4874 7470 5265 -With:.XMLHttpRe > 0x03e0: 7175 6573 740d 0a50 7261 676d 613a 206e quest..Pragma:.n > 0x03f0: 6f2d 6361 6368 650d 0a43 6163 6865 2d43 o-cache..Cache-C > 0x0400: 6f6e 7472 6f6c 3a20 6e6f 2d63 6163 6865 ontrol:.no-cache > 0x0410: 0d0a 436f 6e74 656e 742d 4c65 6e67 7468 ..Content-Length > 0x0420: 3a20 3832 0d0a 582d 466f 7277 6172 6465 :.82..X-Forwarde > 0x0430: 642d 466f 723a 2032 3032 2e31 3638 2e37 d-For:.202.168.7 > 0x0440: 312e 3137 300d 0a58 2d56 6172 6e69 7368 1.170..X-Varnish > 0x0450: 3a20 3430 3335 3230 3532 300d 0a0d 0a :.403520520.... > 2011-03-14 11:16:11.923115 IP 193.27.1.46.55567 > 193.27.1.45.80: F 7778:7778(0) ack 148116 win 757 > 0x0000: 4500 0034 b04d 4000 4006 05e5 c11b 012e E..4.M at .@....... 
> 0x0010: c11b 012d d90f 0050 3df9 5b5b 48af 6a67 ...-...P=.[[H.jg > 0x0020: 8011 02f5 562f 0000 0101 080a 0c9d 9a57 ....V/.........W > 0x0030: 1019 bc2d ...- > 2011-03-14 11:16:11.923442 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7778 win 137 > 0x0000: 4500 0034 9178 4000 4006 24ba c11b 012d E..4.x at .@.$....- > 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5b .....P..H.jg=.[[ > 0x0020: 8010 0089 56b4 0000 0101 080a 1019 be15 ....V........... > 0x0030: 0c9d 9a57 ...W > 2011-03-14 11:16:11.923454 IP 193.27.1.45.80 > 193.27.1.46.55567: . ack 7779 win 137 > 0x0000: 4500 0034 9179 4000 4006 24b9 c11b 012d E..4.y at .@.$....- > 0x0010: c11b 012e 0050 d90f 48af 6a67 3df9 5b5c .....P..H.jg=.[\ > 0x0020: 8010 0089 56b3 0000 0101 080a 1019 be15 ....V........... > 0x0030: 0c9d 9a57 ...W > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From mrbits.dcf at gmail.com Fri Mar 25 11:38:51 2011 From: mrbits.dcf at gmail.com (MrBiTs) Date: Fri, 25 Mar 2011 07:38:51 -0300 Subject: Using cron to purge cache Message-ID: <4D8C70BB.8010408@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/25/2011 06:54 , Per Buer wrote: > Hi Nuno. > > On Fri, Mar 25, 2011 at 10:47 AM, Nuno Neves wrote: >> Hello, >> >> I have a file named varnish-purge with this content that it's executed daily >> by cron, but the objects remain in the cache, even when I run it manually. >> -------------------------------------------------------------------------------------------- >> #!/bin/sh >> >> /usr/bin/varnishadm -S /etc/varnish/secret -T 0.0.0.0:6082 "url.purge .*" > > url.purge will create what we call a "ban", or a filter. Think of it > as a lazy purge. The objects will remain in memory but killed during > lookup. If you want to kill the objects from cache you'd have to set > up the ban lurker to walk the objects and expunge them. > > If you want the objects to actually disappear from memory right away > you would have to do a HTTP PURGE call, and setting the TTL to zero, > but that means you'd have to kill off every URL in cache. > I think we can do a nice discussion here. First, and this is a big off-topic here, if I need to purge all contents from time to time, it's better to create a huge webserver structure, to support requests, change the application a little to generate static pages from time to time to not increase the database load and forget about Varnish. But this is discussion to another list, another time. Second, is this recommended ? I mean, purge all URL, all contents in cache will do varnish to request this content again to backend, increasing server load and it can cause problems. What to you guys think about it ? I think it is better to have a purge system (like a message queue or a form to kill some objetcs) to remove only really wanted objects. If you need to purge all varnish contents, why not just restart varnish from time to time ? But, again, all backend issues must be considerated here. - -- LLAP .0. 
MrBiTs - mrbits.dcf at gmail.com ..0 GnuPG - http://keyserver.fug.com.br:11371/pks/lookup?op=get&search=0x6EC818FC2B3CA5AB 000 http://www.mrbits.com.br -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (Darwin) iQEcBAEBCAAGBQJNjHC7AAoJEG7IGPwrPKWr3kkH/1zim9haorjg4qbrLeefsyjd chBzbCdNwNUPqjbKW+V0hyw7OZY80boMCfD7ZIWgWd+Dy5kCou01D7qebRGYGHt9 oaSmgNFXISMUwOtZwl4F5uKsKhxH7ZtBdJncomoSz3+Apl9yY3gB0aYYfNoi8YoS btgWsNKBzWQTR2pFUz8dYqumrr0aQU3sQRhqBQ7YU165GnhzBSAOxQuTXwM5Lp+j IPLwfWuPaPdSt5nhueDrovdQqHGctWDjkB2JGpi0M8ALvPHETKIZA5oBMHXuXhXY uURPvOsLm2bFmhzDYG3Zr0sJ81ek4K7T2LXd4yT9uiqisnyd5WjbfTH6XS4keDY= =x2+0 -----END PGP SIGNATURE----- From contact at jpluscplusm.com Fri Mar 25 11:55:14 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 25 Mar 2011 10:55:14 +0000 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: On 21 March 2011 15:08, Jonathan Matthews wrote: > Hi all - > > I've got some long-running squid instances, mainly used for caching > medium-sized binaries, which I'd like to replace with some varnish > instances. ?The binaries are quite heavy to regenerate on the distant > origin servers and there's a large number of them. ?Hence, I'd like to > use the squid cache as a target to warm a (new, nearby) varnish > instance instead of just pointing the varnish instance at the remote > origin servers. > > The squid instances are running in proxy mode, and require (I > *believe*) an HTTP CONNECT. ?I've looked around for people trying the > same thing, but haven't come across any success stories. ?I'm > perfectly prepared to be told that I simply have to reconfigure the > squid instances in mixed proxy/origin-server mode, and that there's no > way around it, but I thought I'd ask the list for guidance first ... > > Any thoughts? Anyone? All opinions welcome ... :-) -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From lampe at hauke-lampe.de Sat Mar 26 18:05:03 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Sat, 26 Mar 2011 18:05:03 +0100 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: <4D8E1CBF.6060802@hauke-lampe.de> On 25.03.2011 11:55, Jonathan Matthews wrote: > The squid instances are running in proxy mode, and require (I > *believe*) an HTTP CONNECT. Do they really? I would think squid just pipes a CONNECT request wihout caching the contents, just like varnish does. I'm not quite sure about that, though. What I *think* you need to do is to rewrite the request URL so that it contains the hostname. An incoming request like this. | GET /foo | Host: example.com should be passed to squid in this form: | GET http://example.com/foo In VCL: set req.backend = squid; if (req.url ~ "^/" && req.http.Host) { set req.url = "http://" req.http.Host req.url; unset req.http.Host; } Hauke. From iliakan at gmail.com Sat Mar 26 21:12:48 2011 From: iliakan at gmail.com (Ilia Kantor) Date: Sat, 26 Mar 2011 23:12:48 +0300 Subject: Current sessions count Message-ID: Hello, How can I get a count of current Varnish sessions from inside VCL? Inline C will do. Approximate will do. I need it to enable DDOS protections if the count of current connects exceeds given constant. Maybe there is a VSL_stats field? -- --- Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kristian at varnish-software.com Mon Mar 28 12:52:13 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Mon, 28 Mar 2011 12:52:13 +0200 Subject: obj.ttl not available in vcl_deliver In-Reply-To: References: Message-ID: <20110328105213.GA9172@localhost.localdomain> On Mon, Mar 21, 2011 at 10:39:50PM -0400, AD wrote: > On Mon, Mar 21, 2011 at 10:30 PM, Ken Brownfield wrote: > > > Per lots of posts on this list, obj is now baresp in newer Varnish > > versions. It sounds like the documentation for this change hasn't been > > fully propagated. Small clarification (which should go into the docs, somewhere): obj.* still exists. beresp is the backend response which you can modify in vcl_fetch. Upon exiting vcl_fetch, beresp is used to allocate space for and populate the obj-structure. The only part of obj.* that is available in vcl_deliver is obj.hits. What you can do is store the ttl on req.http somewhere (assuming the conversions work) in vcl_hit, then copy it onto resp.* in vcl_deliver. - Kristian From johnson at nmr.mgh.harvard.edu Mon Mar 28 13:23:41 2011 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Mon, 28 Mar 2011 07:23:41 -0400 (EDT) Subject: 2.0.5 -> 2.1.5 Message-ID: Hi, Currently running 2.0.5. Has been working so well as a rule we just forgot about it. Would like to update to 2.1.5 because 2.0.5 hung up last week. I saw mention of hang bug in 2.0.5 but this is the first time we've felt it. I made a change to the config a while back to prevent double caching on a server name alternate name. Question, this is a plug n' play, yes? I can just install the new RPM and it will take off were it was stopped? No config differences that are applicable? That would be bloody awesome. If the is anything that will cause I problem I'd like to know about it before the update. Want the server down as short a time as possible. Tnx. ------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |"Life is chaos. Chaos is life. Control is an Charlestown, MA., 02129 USA | illusion." Trance Gemini ------------------------------------------------------------------------------- The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. From s.welschhoff at lvm.de Mon Mar 28 13:24:36 2011 From: s.welschhoff at lvm.de (Stefan Welschhoff) Date: Mon, 28 Mar 2011 13:24:36 +0200 Subject: localhost Message-ID: Hello, I am very new in varnish. I try to get a return code 200 when varnish opens the default backend. The default backend will be localhost. Is it possible? Thanks for your help. Kind regards -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: LVM_Unternehmenssignatur.pdf Type: application/pdf Size: 20769 bytes Desc: not available URL: From samcrawford at gmail.com Mon Mar 28 17:06:35 2011 From: samcrawford at gmail.com (Sam Crawford) Date: Mon, 28 Mar 2011 16:06:35 +0100 Subject: 2.0.5 -> 2.1.5 In-Reply-To: References: Message-ID: It'd probably be wise to spin up a 2.1.5 instance of Varnish on a development server using your production VCL. If it parses it okay and starts, then you should be fine. The only change that may catch you out is that obj.* changed to beresp.* from 2.0.x to 2.1.x. Thanks, Sam On 28 March 2011 12:23, Chris Johnson wrote: > ? ? Hi, > > ? ? Currently running 2.0.5. ?Has been working so well as a rule we > just forgot about it. ?Would like to update to 2.1.5 because 2.0.5 hung > up last week. ?I saw mention of hang bug in 2.0.5 but this is the first > time we've felt it. > > ? ? I made a change to the config a while back to prevent double > caching on a server name alternate name. ?Question, this is a plug n' play, > yes? ?I can just install the new RPM and it will take off were it was > stopped? ?No config differences that are applicable? ?That would be bloody > awesome. ?If the is anything that will cause I problem I'd like > to know about it before the update. ?Want the server down as short a > time as possible. ?Tnx. > > ------------------------------------------------------------------------------- > Chris Johnson ? ? ? ? ? ? ? |Internet: johnson at nmr.mgh.harvard.edu > Systems Administrator ? ? ? |Web: > ?http://www.nmr.mgh.harvard.edu/~johnson > NMR Center ? ? ? ? ? ? ? ? ?|Voice: ? ?617.726.0949 > Mass. General Hospital ? ? ?|FAX: ? ? ?617.726.7422 > 149 (2301) 13th Street ? ? ?|"Life is chaos. ?Chaos is life. ?Control is an > Charlestown, MA., 02129 USA | illusion." ?Trance Gemini > ------------------------------------------------------------------------------- > > > The information in this e-mail is intended only for the person to whom it is > addressed. If you believe this e-mail was sent to you in error and the > e-mail > contains patient information, please contact the Partners Compliance > HelpLine at > http://www.partners.org/complianceline . If the e-mail was sent to you in > error > but does not contain patient information, please contact the sender and > properly > dispose of the e-mail. > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From mhettwer at team.mobile.de Mon Mar 28 17:24:51 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Mon, 28 Mar 2011 16:24:51 +0100 Subject: localhost In-Reply-To: Message-ID: On 28.03.11 13:24, "Stefan Welschhoff" wrote: >Hello, Hi there, > >I am very new in varnish. I try to get a return code 200 when varnish >opens the default backend. The default backend will be localhost. Is it >possible? Short answer: Yes, if your backend behaves well. Little longer answer: If you configure a backend like that: backend default { .host = "127.0.0.1"; .port = "8080"; } And assuming that your backend really listens on localhost:8080, then use the following in vcl_recv: sub vcl_recv { set req.backend = default; } Now start varnish, and assuming you let varnish listen on localhost:80 you can do something like that wget -0 /dev/null -q -S http://localhost/foo.txt The request GET /foo.txt goes to varnish and he forwards this to your backend at localhost:8080. 
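
If the goal is specifically for varnish itself to verify that the backend answers 200, a health probe can also be attached to the backend definition. A minimal sketch, assuming the Varnish 2.1 probe syntax (the probe URL and the timing values are just placeholders):

    backend default {
        .host = "127.0.0.1";
        .port = "8080";
        .probe = {
            # varnish fetches this URL from the backend on every poll
            .url = "/";
            .interval = 5s;
            .timeout = 1s;
            # backend is healthy if 3 of the last 5 polls returned 200
            .window = 5;
            .threshold = 3;
        }
    }

With a failing probe the backend is marked sick and varnish stops sending it traffic until it recovers.
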
If "wget -0 /dev/null -q -S http://localhost:8080/foo.txt" works, then "wget -0 /dev/null -q -S http://localhost/foo.txt" will work too. Cheers, Marian PS.: Start with the fine documentation of varnish! From johnson at nmr.mgh.harvard.edu Mon Mar 28 17:54:23 2011 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Mon, 28 Mar 2011 11:54:23 -0400 (EDT) Subject: 2.0.5 -> 2.1.5 In-Reply-To: References: Message-ID: Well my config has the following in vcl fetch sub vcl_fetch { if (!obj.cacheable) { return (pass); } if (req.url ~ "^/fswiki") { unset req.http.Set-Cookie; set obj.ttl = 600s; } if (req.url ~ "^/wiki/fswiki_htdocs") { unset req.http.Set-Cookie; set obj.ttl = 600s; } if (obj.http.Set-Cookie) { return (pass); } set obj.prefetch = -30s; return (deliver); } But it's isolated. Presumably the 2.1.5 has its own. > It'd probably be wise to spin up a 2.1.5 instance of Varnish on a > development server using your production VCL. If it parses it okay and > starts, then you should be fine. > > The only change that may catch you out is that obj.* changed to > beresp.* from 2.0.x to 2.1.x. > > Thanks, > > Sam > > > On 28 March 2011 12:23, Chris Johnson wrote: >> ? ? Hi, >> >> ? ? Currently running 2.0.5. ?Has been working so well as a rule we >> just forgot about it. ?Would like to update to 2.1.5 because 2.0.5 hung >> up last week. ?I saw mention of hang bug in 2.0.5 but this is the first >> time we've felt it. >> >> ? ? I made a change to the config a while back to prevent double >> caching on a server name alternate name. ?Question, this is a plug n' play, >> yes? ?I can just install the new RPM and it will take off were it was >> stopped? ?No config differences that are applicable? ?That would be bloody >> awesome. ?If the is anything that will cause I problem I'd like >> to know about it before the update. ?Want the server down as short a >> time as possible. ?Tnx. >> >> ------------------------------------------------------------------------------- >> Chris Johnson ? ? ? ? ? ? ? |Internet: johnson at nmr.mgh.harvard.edu >> Systems Administrator ? ? ? |Web: >> ?http://www.nmr.mgh.harvard.edu/~johnson >> NMR Center ? ? ? ? ? ? ? ? ?|Voice: ? ?617.726.0949 >> Mass. General Hospital ? ? ?|FAX: ? ? ?617.726.7422 >> 149 (2301) 13th Street ? ? ?|"Life is chaos. ?Chaos is life. ?Control is an >> Charlestown, MA., 02129 USA | illusion." ?Trance Gemini >> ------------------------------------------------------------------------------- >> >> >> The information in this e-mail is intended only for the person to whom it is >> addressed. If you believe this e-mail was sent to you in error and the >> e-mail >> contains patient information, please contact the Partners Compliance >> HelpLine at >> http://www.partners.org/complianceline . If the e-mail was sent to you in >> error >> but does not contain patient information, please contact the sender and >> properly >> dispose of the e-mail. >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > ------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. 
General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |I know a place where dreams are born and time Charlestown, MA., 02129 USA | is never planned. Neverland ------------------------------------------------------------------------------- From info at songteksten.nl Tue Mar 29 14:56:15 2011 From: info at songteksten.nl (Maikel - Songteksten.nl) Date: Tue, 29 Mar 2011 14:56:15 +0200 Subject: Mobile redirect question Message-ID: <1301403375.2060.20.camel@maikel-laptop> Hi, I'm using currently the following code to do a mobile site redirect. I found it somewhere on the internet. if ( req.http.user-agent ~ "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" && req.http.host ~ "(www.example.nl|www.example.be)" ) { set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", "http://m.\2"); error 750 req.http.newhost; The redirect from www.example.nl to m.example.nl works perfectly, only www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou the page.php?id=1 part). Is it possible to change the redirect so it also includes the rest of the url? Thanks, Maikel From bjorn at ruberg.no Tue Mar 29 15:04:33 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Tue, 29 Mar 2011 15:04:33 +0200 Subject: Mobile redirect question In-Reply-To: <1301403375.2060.20.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> Message-ID: <4D91D8E1.9020905@ruberg.no> On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > Hi, > > I'm using currently the following code to do a mobile site redirect. I > found it somewhere on the internet. > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; > > The redirect from www.example.nl to m.example.nl works perfectly, only > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > the page.php?id=1 part). > > Is it possible to change the redirect so it also includes the rest of > the url? You don't show how error 750 is handled in your VCL, so it's a bit hard to tell how to improve your current config. 
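
In general, though, the fix is to build the redirect target from both the Host header and req.url, so the path and query string survive. A rough sketch, reusing the error-750 trick from your config (the user-agent check is left out here, and the host pattern is only an assumption about your domains):

    sub vcl_recv {
        # the mobile user-agent check from the original config goes here
        if (req.http.host ~ "(www.example.nl|www.example.be)") {
            # same host rewrite as before, with req.url appended so
            # /page.php?id=1 is carried over to the mobile host
            set req.http.newhost =
                regsub(req.http.host, "(www)?\.(.*)", "http://m.\2") req.url;
            error 750 req.http.newhost;
        }
    }

A vcl_error handler that copies obj.response into the Location header and sets the status to 302 then completes the redirect.
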
However, the following URL should get you going: http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping -- Bj?rn From info at songteksten.nl Tue Mar 29 15:13:00 2011 From: info at songteksten.nl (Maikel - Songteksten.nl) Date: Tue, 29 Mar 2011 15:13:00 +0200 Subject: Mobile redirect question In-Reply-To: <4D91D8E1.9020905@ruberg.no> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> Message-ID: <1301404380.2060.21.camel@maikel-laptop> The redirect looks like this: sub vcl_error { if (obj.status == 750) { set obj.http.Location = obj.response; set obj.status = 302; return(deliver); } } Maikel On Tue, 2011-03-29 at 15:04 +0200, Bj?rn Ruberg wrote: > On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > > Hi, > > > > I'm using currently the following code to do a mobile site redirect. I > > found it somewhere on the internet. > > > > if ( req.http.user-agent ~ > > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > > > && req.http.host ~ "(www.example.nl|www.example.be)" > > > > ) { > > > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > > "http://m.\2"); > > error 750 req.http.newhost; > > > > The redirect from www.example.nl to m.example.nl works perfectly, only > > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > > the page.php?id=1 part). > > > > Is it possible to change the redirect so it also includes the rest of > > the url? > > You don't show how error 750 is handled in your VCL, so it's a bit hard > to tell how to improve your current config. However, the following URL > should get you going: > > http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping > From bjorn at ruberg.no Tue Mar 29 15:16:17 2011 From: bjorn at ruberg.no (=?UTF-8?B?QmrDuHJuIFJ1YmVyZw==?=) Date: Tue, 29 Mar 2011 15:16:17 +0200 Subject: Mobile redirect question In-Reply-To: <1301404380.2060.21.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> <1301404380.2060.21.camel@maikel-laptop> Message-ID: <4D91DBA1.9060108@ruberg.no> On 29. mars 2011 15:13, Maikel - Songteksten.nl wrote: > The redirect looks like this: > > sub vcl_error { > if (obj.status == 750) { > set obj.http.Location = obj.response; > set obj.status = 302; > return(deliver); > } > } You should still take a look at the URL I mentioned. And please don't top-post. -- Bj?rn From scaunter at topscms.com Tue Mar 29 15:53:41 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Tue, 29 Mar 2011 09:53:41 -0400 Subject: Mobile redirect question In-Reply-To: <4D91D8E1.9020905@ruberg.no> References: <1301403375.2060.20.camel@maikel-laptop> <4D91D8E1.9020905@ruberg.no> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902C011D20CB@TMG-EVS02.torstar.net> On 29. mars 2011 14:56, Maikel - Songteksten.nl wrote: > Hi, > > I'm using currently the following code to do a mobile site redirect. 
I > found it somewhere on the internet. > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*ht c.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia. *|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*Son yEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcat el.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoM o.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^K WC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*| ^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.* |^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Win dows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; > > The redirect from www.example.nl to m.example.nl works perfectly, only > www.example.nl/page.php?id=1 also redirect to m.example.nl (so withou > the page.php?id=1 part). > > Is it possible to change the redirect so it also includes the rest of > the url? > You don't show how error 750 is handled in your VCL, so it's a bit hard > to tell how to improve your current config. However, the following URL > should get you going: > http://www.varnish-cache.org/trac/wiki/VCLExampleHostnameRemapping set req.http.newhost = regsub(req.url, "^/(.*)", "http://m.example.ca/\1"); Stef From geoff at uplex.de Tue Mar 29 16:16:17 2011 From: geoff at uplex.de (Geoff Simmons) Date: Tue, 29 Mar 2011 16:16:17 +0200 Subject: Mobile redirect question In-Reply-To: <1301403375.2060.20.camel@maikel-laptop> References: <1301403375.2060.20.camel@maikel-laptop> Message-ID: <4D91E9B1.9080407@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/29/11 02:56 PM, Maikel - Songteksten.nl wrote: > > if ( req.http.user-agent ~ > "(.*Android.*|.*Blackberry.*|.*BlackBerry.*|.*Blazer.*|.*Ericsson.*|.*htc.*|.*Huawei.*|.*iPhone.*|.*iPod.*|.*MobilePhone.*|.*Motorola.*|.*nokia.*|.*Novarra.*|.*O2.*|.*Palm.*|.*Samsung.*|.*Sanyo.*|.*Smartphone.*|.*SonyEricsson.*|.*Symbian.*|.*Toshiba.*|.*Treo.*|.*vodafone.*|.*Xda.*|^Alcatel.*|^Amoi.*|^ASUS.*|^Audiovox.*|^AU-MIC.*|^BenQ.*|^Bird.*|^CDM.*|^DoCoMo.*|^dopod.*|^Fly.*|^Haier.*|^HP.*iPAQ.*|^i-mobile.*|^KDDI.*|^KONKA.*|^KWC.*|^Lenovo.*|^LG.*|^NEWGEN.*|^Panasonic.*|^PANTECH.*|^PG.*|^Philips.*|^portalmmm.*|^PPC.*|^PT.*|^Qtek.*|^Sagem.*|^SCH.*|^SEC.*|^Sendo.*|^SGH.*|^Sharp.*|^SIE.*|^SoftBank.*|^SPH.*|^UTS.*|^Vertu.*|.*Opera.Mobi.*|.*Windows.CE.*|^ZTE.*)" > > && req.http.host ~ "(www.example.nl|www.example.be)" > > ) { > > set req.http.newhost = regsub(req.http.host, "(www)?\.(.*)", > "http://m.\2"); > error 750 req.http.newhost; This is not what you asked about, but you're almost certainly losing a lot of performance with that regex. I would suggest that you put the check against req.http.host first (so that it doesn't bother with the pattern match when it doesn't have to), and above all, get rid of the leading and trailing .*'s in the regex. When you match a string against a regex like ".*foobar.*", it first matches the leading .* all the way until the end of the input string, overlooking any instance of "foobar" it sees along the way. Then it starts backtracking until it can match ".*f", then ".*fo", and so on. If it can match ".*foobar", it then takes the trouble to match the trailing .* to the end of the string. 
This is happening for all of the alternates in your regex until a match is found. phk's advice at VUG was: write your regex so that you can prove as quickly as possible that an input string *doesn't* match it. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNkemwAAoJEOUwvh9pJNURLpEP/3t9+FTbTpzXmx9cts+2NOcr VJl4L+bi4+b+Zkn46yMgZjwLOyRWhqYQBfFozqKVOIX204jH0kzuHFqWwkF3luNO B6izenicK6jhQurdUsS4CTJ6j74yCgX1Jks9DC4Z3pLcwwY/swzJsV2ldKx9rqWJ sr6NJv8WxSz1Pb/i5BP6C7veplmO/rdKLZxzll5b7Qic6LicrRGG5ny0exUdysce q2ZlcAXCe7//7Ha7+1wlw5xXb3APcx96SB4bh+ASS63KgHevKkSwPOFFUdv//FzG xLEc/U5MqKjiFErx0IPzPZrD+E2Yf0PIVqRc9L7eL9g5SSJEfqwmFCrHucLYpmpW tpdDepflnUv1p7IkY0boNabds8AhRPAIAtYi6o8+mjGQBtGVdOuQ4SbH2+2OOMLz x3YtAcjUjhArg8gUSjZRPIXfbHHy6vSiYKBPBqJUPmUBRw009VsCNO1F58b1sXJb YVmX6cKwfcq97GFqBBp+CsKEyJsJaubIReXQOoJTRrPVHqqn4aWmYOk1UHQiN5Pw iXNFJQbV/bh0jrgk5W5bcOS+WyvwSQm0aK8SMsHnVY4gh73md6kcD1rybc3S5doC +WEBLMdJWteDOZMQDBVgXXUmwmzHk8eX+6cRQKe4IaXXgRSoGOAZiwy+6G7a3YYk klz7Nm1RM3vs6EmQfvoY =kwSJ -----END PGP SIGNATURE----- From listas at kurtkraut.net Tue Mar 29 22:09:31 2011 From: listas at kurtkraut.net (Kurt Kraut) Date: Tue, 29 Mar 2011 17:09:31 -0300 Subject: How to collect lines from varnishncsa only from a specific domain? Message-ID: Hi, I'm trying to use varnishncsa -c -I to collect the output of varnishncsa concerning a specific domain (e.g.: domain.com). I've attempted the following commands: varnishncsa -c -I *domain.com* varnishncsa -c -I /*domain.com/ And none of them worked. But the following command works: varnishncsa -c | grep domain.com I feel there is something odd with the varnishncsa -I command. How does this work? Thanks in advance, Kurt Kraut -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionathan at gmail.com Wed Mar 30 04:43:43 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Tue, 29 Mar 2011 23:43:43 -0300 Subject: varnish as traffic director Message-ID: hi! is there any way to use varnish to direct my traffic to different backends depending on the requested url? so for example I would have 2 different backends: - search-backend - items-backend if the requested url is /search I want to direct the traffic to search-backend and if the requested url is /items I want to direct the traffic to items-backend is this a common use case for varnish or I am trying to accomplish something that should be done using something else? thanks! jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Wed Mar 30 04:52:35 2011 From: straightflush at gmail.com (AD) Date: Tue, 29 Mar 2011 22:52:35 -0400 Subject: varnish as traffic director In-Reply-To: References: Message-ID: sub vcl_recv { if (req.url ~ "^/search") { set req.backend = search-backend; } elseif (req.url ~ "^/items") { set req.backend = items-backend; } } On Tue, Mar 29, 2011 at 10:43 PM, Jonathan Leibiusky wrote: > hi! is there any way to use varnish to direct my traffic to different > backends depending on the requested url? 
> so for example I would have 2 different backends: > - search-backend > - items-backend > > if the requested url is /search I want to direct the traffic to > search-backend > and if the requested url is /items I want to direct the traffic to > items-backend > > is this a common use case for varnish or I am trying to accomplish > something that should be done using something else? > > thanks! > > jonathan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionathan at gmail.com Wed Mar 30 05:01:13 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Wed, 30 Mar 2011 00:01:13 -0300 Subject: varnish as traffic director In-Reply-To: References: Message-ID: Thanks! If I have 100 of different rules, I would have a very big if block, right? Is this a common use case for varnish? On Tue, Mar 29, 2011 at 11:52 PM, AD wrote: > sub vcl_recv { > > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > elseif (req.url ~ "^/items") { > set req.backend = items-backend; > } > > } > > On Tue, Mar 29, 2011 at 10:43 PM, Jonathan Leibiusky wrote: > >> hi! is there any way to use varnish to direct my traffic to different >> backends depending on the requested url? >> so for example I would have 2 different backends: >> - search-backend >> - items-backend >> >> if the requested url is /search I want to direct the traffic to >> search-backend >> and if the requested url is /items I want to direct the traffic to >> items-backend >> >> is this a common use case for varnish or I am trying to accomplish >> something that should be done using something else? >> >> thanks! >> >> jonathan >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Wed Mar 30 08:32:12 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Wed, 30 Mar 2011 08:32:12 +0200 Subject: 2.0.5 -> 2.1.5 In-Reply-To: (Sam Crawford's message of "Mon, 28 Mar 2011 16:06:35 +0100") References: Message-ID: <87r59pt8yb.fsf@qurzaw.varnish-software.com> ]] Sam Crawford | The only change that may catch you out is that obj.* changed to | beresp.* from 2.0.x to 2.1.x. Regexes also changed from case-insensitive to case-sensitive and we switched from POSIX regexes to PCRE, which might be important as well. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From martin.boer at bizztravel.nl Tue Mar 29 11:56:48 2011 From: martin.boer at bizztravel.nl (Martin Boer) Date: Tue, 29 Mar 2011 11:56:48 +0200 Subject: Warming the cache from an existing squid proxy instance In-Reply-To: References: Message-ID: <4D91ACE0.7020008@bizztravel.nl> Hi Jonathan, What you could do is something like; backend squid_1 { ... } backend backend_1 { ... } director prefer_squid random { .retries = 1; { .backend = squid_1 .weight = 250; } { .backend = backend_1; .weight = 1; } } This will make sure varnish will retrieve data from the squids mostly and gives you the chance to do the migration in your own time. 
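
Spelled out a little more fully, that could look like this (the host/port values are placeholders, and note that each .backend entry needs its own trailing semicolon):

    backend squid_1 {
        .host = "10.0.0.10";
        .port = "3128";
    }
    backend backend_1 {
        .host = "10.0.0.20";
        .port = "80";
    }
    director prefer_squid random {
        .retries = 1;
        { .backend = squid_1;   .weight = 250; }
        { .backend = backend_1; .weight = 1; }
    }
    sub vcl_recv {
        set req.backend = prefer_squid;
    }

With weights like these only about one backend fetch in 250 skips the squids and goes straight to the origin, so the old cache keeps absorbing most of the misses while the new varnish warms up.
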
Regards, Martin On 03/25/2011 11:55 AM, Jonathan Matthews wrote: > On 21 March 2011 15:08, Jonathan Matthews wrote: >> Hi all - >> >> I've got some long-running squid instances, mainly used for caching >> medium-sized binaries, which I'd like to replace with some varnish >> instances. The binaries are quite heavy to regenerate on the distant >> origin servers and there's a large number of them. Hence, I'd like to >> use the squid cache as a target to warm a (new, nearby) varnish >> instance instead of just pointing the varnish instance at the remote >> origin servers. >> >> The squid instances are running in proxy mode, and require (I >> *believe*) an HTTP CONNECT. I've looked around for people trying the >> same thing, but haven't come across any success stories. I'm >> perfectly prepared to be told that I simply have to reconfigure the >> squid instances in mixed proxy/origin-server mode, and that there's no >> way around it, but I thought I'd ask the list for guidance first ... >> >> Any thoughts? > Anyone? All opinions welcome ... :-) > From ronny.ostman at apberget.se Wed Mar 30 08:35:10 2011 From: ronny.ostman at apberget.se (Ronny =?UTF-8?B?w5ZzdG1hbg==?=) Date: Wed, 30 Mar 2011 08:35:10 +0200 Subject: Using varnish to cache remote content Message-ID: Hello! This might be a stupid question since I've searched alot and haven't really found the answer.. Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great for all content that my backend provides. The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since the request may not go through varnish.. i guess? To further illustrate my question, here's an example of how it might look: GET mydomain.com - Domain: mydomain.com GET main.css - Domain: mydomain.com GET hello.jpg - Domain: static.mydomain.com GET anypicture.png - Domain: flickr.com GET foo.js - Domain: foo.com In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a way to have varnish do this "dynamically". The requests doesnt seem to go through my varnish according to varnishlog and this makes it hard to set backend depending on host. Am I trying to accomplish something impossible here? Thanks! Regards, Ronny -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Wed Mar 30 08:52:03 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 29 Mar 2011 23:52:03 -0700 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: Yes, what you describe is impossible. Since all of these requests are handled on the client/browser side, you can't effect them. The only way would be to either A) configure the user to proxy through your varnish for specific domains (ugly), or B) filter the user's DNS and replace flickr.com etc with the IP of your varnish cache (even uglier). Neither is possible with general internet traffic. Not a Varnish thing, but in theory you could modify your backends to rewrite external URLs that they emit as http://your_varnish.com/flickr.com/real_file(instead of http://flickr.com/real_file) and then have Varnish perform cache magick on that rewritten URL. But start talking SSL and it all goes sideways. And this assumes you wanted only to proxy external URLs that your site is emitting. 
If there's a glimmer of possibility, it's a really ugly glimmer. ;-) -- kb On Tue, Mar 29, 2011 at 23:35, Ronny ?stman wrote: > Hello! > > This might be a stupid question since I've searched alot and haven't really > found the answer.. > > Anyway, I have a varnish set up caching requests to my backend the way I > want it to and it works great > for all content that my backend provides. > > The problem I am having is caching content from remote sources.. I'm not > sure if this is really possible since > the request may not go through varnish.. i guess? > > To further illustrate my question, here's an example of how it might look: > > GET mydomain.com - Domain: mydomain.com > GET main.css - Domain: mydomain.com > GET hello.jpg - Domain: static.mydomain.com > GET anypicture.png - Domain: flickr.com > GET foo.js - Domain: foo.com > > In this example, is it possible to have my varnish cache those "remote" > requests as well? I can set up backends > for those remote domains and force varnish to use them instead of my own > backend but I can't seem to find a > way to have varnish do this "dynamically". The requests doesnt seem to go > through my varnish according to > varnishlog and this makes it hard to set backend depending on host. > > Am I trying to accomplish something impossible here? > > Thanks! > > Regards, > Ronny > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Wed Mar 30 08:51:43 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 30 Mar 2011 08:51:43 +0200 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: You cannot do that with varnish, or with anything else :) Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Ronny ?stman Sent: Wednesday, March 30, 2011 8:35 AM To: varnish-misc at varnish-cache.org Subject: Using varnish to cache remote content Hello! This might be a stupid question since I've searched alot and haven't really found the answer.. Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great for all content that my backend provides. The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since the request may not go through varnish.. i guess? To further illustrate my question, here's an example of how it might look: GET mydomain.com - Domain: mydomain.com GET main.css - Domain: mydomain.com GET hello.jpg - Domain: static.mydomain.com GET anypicture.png - Domain: flickr.com GET foo.js - Domain: foo.com In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a way to have varnish do this "dynamically". The requests doesnt seem to go through my varnish according to varnishlog and this makes it hard to set backend depending on host. Am I trying to accomplish something impossible here? Thanks! Regards, Ronny -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ronny.ostman at apberget.se Wed Mar 30 09:16:06 2011 From: ronny.ostman at apberget.se (Ronny =?UTF-8?B?w5ZzdG1hbg==?=) Date: Wed, 30 Mar 2011 09:16:06 +0200 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: Thank's for your answers! :) I figured it was more or less impossible. > >If there's a glimmer of possibility, it's a really ugly glimmer. ;-) Luckily ugly workarounds is my speciality! ;) > > But I guess a pretty resonable solution for caching all the content I want from the domains that I control on the same varnish installation is to point e.g. www.mydomain and static.mydomain to my varnish server and "route" traffic using several backends? Regards, Ronny > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Mar 30 09:23:13 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 30 Mar 2011 09:23:13 +0200 Subject: varnish as traffic director In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 5:01 AM, Jonathan Leibiusky wrote: > Thanks! > If I have 100 of different rules, I would have a very big if block, right? > Is this a common use case for varnish? Yes. It's quite common to have a lot of logic. Don't worry about it, VCL is executed at light speed. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From geoff at uplex.de Wed Mar 30 10:06:06 2011 From: geoff at uplex.de (Geoff Simmons) Date: Wed, 30 Mar 2011 10:06:06 +0200 Subject: How to collect lines from varnishncsa only from a specific domain? In-Reply-To: References: Message-ID: <4D92E46E.8010101@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/29/11 10:09 PM, Kurt Kraut wrote: > > I'm trying to use varnishncsa -c -I to collect the output of varnishncsa > concerning a specific domain (e.g.: domain.com). I've attempted the > following commands: > > varnishncsa -c -I *domain.com* > varnishncsa -c -I /*domain.com/ > > And none of them worked. But the following command works: > > varnishncsa -c | grep domain.com The -I flag for varnishncsa and other tools does regex matching, not globbing with the '*' wildcard. But if it's enough for you to just match the string domain.com, you don't need anything else: varnishncsa -c -I domain.com Just like for grep. 
Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNkuRuAAoJEOUwvh9pJNURuPYQAI51SjWQXEzicJtgjpMW5R7c rk88lflgWqCyo3390qvd3eA2YAW7JIPSsOcFhLwFSb1/OxLHqmn5lIy2y/gTiJbV kd9yoVMomwWyH0vAr3F1L3iW7HwTLMoz/F9nXNBYRYhbAUaAs9ESrEIiPqD3SYP8 Z1Py1DwtEhiVfJ8X3yYEfVEef9B60Zn1Y3czrQ75m+i9mvljMWxCa2kL/IgKVTe+ MlwQA4wPni+qxTJoC5wwZNLh6FHRtl2F6OQUJrm0bBjt97tw8Ul+1DLFUjHCY6vl lPTXQFflqDwaBo4kiPHRgpKHvmFpcwNZokYeZ9bQgB8ds+fJCx4DBI/t40pUCRiB gJT5AKfFCiFyu/HdC4vGMqXrt22wn9yriHUhTI8qnbHRj939wFkoBix3XrjjVKSW 4Ma1kaET3tTJILtz4xhAVhQPOb4HEuoY5otcTrUS+Ix5aEQwsjFsEVkzS7mh8RLc OtNe8t0JEmLm7SdgFSit7RO/i0dPRyL4ih8duB1PIKeJxys8nSIQvODzban7k9Oa rrQVsplPLmHjngUBoDxNkyc1yo7s6OjsVO7seVxjvgOSoWdmgnOEG3oVvF9uZ2Jd tFPl7gfBM6eHR8owLgUscuaQGVbRr00om5Y3RSX/MGqPZwQR8Dy/X6YJI0DIAZfQ Mj2/uaZ7Uw4lHJFRxhkL =Lbj3 -----END PGP SIGNATURE----- From diego.roccia at subito.it Wed Mar 30 10:51:24 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Wed, 30 Mar 2011 10:51:24 +0200 Subject: Varnish stuck on most served content Message-ID: <4D92EF0C.40204@subito.it> Hi Guys, This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: +---------+ +---------+ +---------+ +---------+ | VARNISH | | VARNISH | | VARNISH | | VARNISH | +---------+ +---------+ +---------+ +---------+ | | | | +------------+------------+------------+ | | +------+-+ +--+-----+ | APACHE | | APACHE | +--------+ +--------+ Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. It's really weird, because css should be always cached. I'm running Centos 5.5 64bit and here's my varnish startup parameters: DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T 0.0.0.0:6082 \ -t 604800 \ -u varnish -g varnish \ -s malloc,54G \ -p thread_pool_add_delay=2 \ -p thread_pools=16 \ -p thread_pool_min=50 \ -p thread_pool_max=4000 \ -p listen_depth=4096 \ -p lru_interval=600 \ -hclassic,500009 \ -p log_hashstring=off \ -p shm_workspace=16384 \ -p ping_interval=2 \ -p default_grace=3600 \ -p pipe_timeout=10 \ -p sess_timeout=6 \ -p send_timeout=10" In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. Hope someone can help me Thanks in advance Diego -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: stats.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: default.vcl URL: From traian.bratucu at eea.europa.eu Wed Mar 30 10:59:08 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Wed, 30 Mar 2011 10:59:08 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D92EF0C.40204@subito.it> References: <4D92EF0C.40204@subito.it> Message-ID: Not sure what you mean by "freeze", but what you need to do is debug the request with "varnishlog". You need to see what exactly happens when the GET request is received by varnish and whether it is served from cache or varnish tries to fetch from the backends. Try " varnishlog -o | grep -A 50 'your.css' " (or something like that) on one of the varnish servers. Traian -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Diego Roccia Sent: Wednesday, March 30, 2011 10:51 AM To: varnish-misc at varnish-cache.org Subject: Varnish stuck on most served content Hi Guys, This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: +---------+ +---------+ +---------+ +---------+ | VARNISH | | VARNISH | | VARNISH | | VARNISH | +---------+ +---------+ +---------+ +---------+ | | | | +------------+------------+------------+ | | +------+-+ +--+-----+ | APACHE | | APACHE | +--------+ +--------+ Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. It's really weird, because css should be always cached. I'm running Centos 5.5 64bit and here's my varnish startup parameters: DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T 0.0.0.0:6082 \ -t 604800 \ -u varnish -g varnish \ -s malloc,54G \ -p thread_pool_add_delay=2 \ -p thread_pools=16 \ -p thread_pool_min=50 \ -p thread_pool_max=4000 \ -p listen_depth=4096 \ -p lru_interval=600 \ -hclassic,500009 \ -p log_hashstring=off \ -p shm_workspace=16384 \ -p ping_interval=2 \ -p default_grace=3600 \ -p pipe_timeout=10 \ -p sess_timeout=6 \ -p send_timeout=10" In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. Hope someone can help me Thanks in advance Diego From diego.roccia at subito.it Wed Mar 30 11:10:35 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Wed, 30 Mar 2011 11:10:35 +0200 Subject: Varnish stuck on most served content In-Reply-To: References: <4D92EF0C.40204@subito.it> Message-ID: <4D92F38B.6090900@subito.it> Hi Traian, Thanks for your interest. The problem is that it's a random issue. I noticed it as I'm using some commercial tools (keynote and gomez) to monitor website performances and I notice some out of average point in the scatter time graph. Experiencing it locally is really hard. 
On 03/30/2011 10:59 AM, Traian Bratucu wrote: > Not sure what you mean by "freeze", but what you need to do is debug the request with "varnishlog". > You need to see what exactly happens when the GET request is received by varnish and whether it is served from cache or varnish tries to fetch from the backends. > > Try " varnishlog -o | grep -A 50 'your.css' " (or something like that) on one of the varnish servers. > > Traian > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Diego Roccia > Sent: Wednesday, March 30, 2011 10:51 AM > To: varnish-misc at varnish-cache.org > Subject: Varnish stuck on most served content > > Hi Guys, > This is my first message in this list, I began working for a new company some months ago and I found this infrastructure: > > +---------+ +---------+ +---------+ +---------+ > | VARNISH | | VARNISH | | VARNISH | | VARNISH | > +---------+ +---------+ +---------+ +---------+ > | | | | > +------------+------------+------------+ > | | > +------+-+ +--+-----+ > | APACHE | | APACHE | > +--------+ +--------+ > > Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, running varnish 2.1.6 (Updated from 2.0.5 1 month ago). They're serving content for up to 450Mbit/s during peaks. > > It's happening often that they freeze serving contents. and I noticed a common pattern: the content that get stuck is always one of the most served, like a css or js file, or some component of the page layout, and it never happens to an image part of the content. > > It's really weird, because css should be always cached. > > I'm running Centos 5.5 64bit and here's my varnish startup parameters: > > DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ > -f ${VARNISH_VCL_CONF} \ > -T 0.0.0.0:6082 \ > -t 604800 \ > -u varnish -g varnish \ > -s malloc,54G \ > -p thread_pool_add_delay=2 \ > -p thread_pools=16 \ > -p thread_pool_min=50 \ > -p thread_pool_max=4000 \ > -p listen_depth=4096 \ > -p lru_interval=600 \ > -hclassic,500009 \ > -p log_hashstring=off \ > -p shm_workspace=16384 \ > -p ping_interval=2 \ > -p default_grace=3600 \ > -p pipe_timeout=10 \ > -p sess_timeout=6 \ > -p send_timeout=10" > > In attach there is my vcl and the varnishstat -1 output after a 24h run of 1 of the servers. Do you notice something bad? > > In the meanwhile I'm running through the documentation, but it's for us an high priority issue as we're talking about the production environment and there's not time now to wait for me to completely understand how does varnish work and find out a solution. > > Hope someone can help me > Thanks in advance > Diego > > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From pom at dmsp.de Wed Mar 30 11:59:03 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Wed, 30 Mar 2011 11:59:03 +0200 Subject: How to collect lines from varnishncsa only from a specific domain? In-Reply-To: <4D92E46E.8010101@uplex.de> References: <4D92E46E.8010101@uplex.de> Message-ID: <4D92FEE7.2030305@dmsp.de> Am 30.03.2011 10:06, schrieb Geoff Simmons: > On 03/29/11 10:09 PM, Kurt Kraut wrote: >> I'm trying to use varnishncsa -c -I to collect the output of varnishncsa >> concerning a specific domain (e.g.: domain.com). 
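
One way to narrow this down is to mark every response with whether it was a cache hit, so the external monitors show right away whether the slow css/js deliveries are hits or misses. A minimal sketch (the header name is arbitrary; obj.hits is available in vcl_deliver):

    sub vcl_deliver {
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        } else {
            set resp.http.X-Cache = "MISS";
        }
    }

If the slow responses turn out to be misses, the next place to look is the backend fetch; if they are hits, the time is being lost on the delivery side.
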
I've attempted the >> following commands: >> >> varnishncsa -c -I *domain.com* >> varnishncsa -c -I /*domain.com/ >> >> And none of them worked. But the following command works: >> >> varnishncsa -c | grep domain.com > The -I flag for varnishncsa and other tools does regex matching, not > globbing with the '*' wildcard. > > But if it's enough for you to just match the string domain.com, you > don't need anything else: > > varnishncsa -c -I domain.com > > Just like for grep. > > > Best, > Geoff I have to support Kurt ;-) Have the same problem (still moved it to the stack of 'unsolved stuff' so far...) varnishncsa -I is either not or at least working strange... Example: aurora ~ # varnishncsa -c XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:16 +0200] "GET http://www.annuna.net/ HTTP/1.0" 200 791 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:17 +0200] "GET http://www.annuna.net/StyleSheet.css HTTP/1.0" 200 2876 "http://www.annuna.net/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" XXX.XXX.XXX.XXX - - [30/Mar/2011:11:48:17 +0200] "GET http://www.annuna.net/img/Logo_Annuna_850x680_weiss.jpg HTTP/1.0" 200 31986 "http://www.annuna.net/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.13) Gecko/20101211 Firefox/3.6.13" aurora ~ # varnishncsa -c -I annuna or aurora ~ # varnishncsa -c -I "^.*annuna.*$" From my understanding it should at least match any line containing the character string "annuna"? But it doesn't... Am I doint it wrong? ^^ Wondering... Stefan -- Dipl.-Inform. Stefan Pommerening Informatik-B?ro: IT-Dienste & Projekte, Consulting & Coaching http://www.dmsp.de From stewsnooze at gmail.com Wed Mar 30 08:40:49 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Wed, 30 Mar 2011 07:40:49 +0100 Subject: Using varnish to cache remote content In-Reply-To: References: Message-ID: <191AFFC3-B260-40FB-9AD7-CABBBF9F4E8B@gmail.com> Hi, You can only cache items where the DNS record for those sites points at the server/infrastructure where you are running Varnish. You could do something crazy like have flickr.mydomain.com referenced in your HTML pages which is configured in Varnish to use flickr.com as a backend. Personally I think this is a bit strange but it is possible. You need to think about why you are caching external stuff in Varnish and whether you are allowed to? Stew On 30 Mar 2011, at 07:35, Ronny ?stman wrote: > Hello! > > This might be a stupid question since I've searched alot and haven't really found the answer.. > > Anyway, I have a varnish set up caching requests to my backend the way I want it to and it works great > for all content that my backend provides. > > The problem I am having is caching content from remote sources.. I'm not sure if this is really possible since > the request may not go through varnish.. i guess? > > To further illustrate my question, here's an example of how it might look: > > GET mydomain.com - Domain: mydomain.com > GET main.css - Domain: mydomain.com > GET hello.jpg - Domain: static.mydomain.com > GET anypicture.png - Domain: flickr.com > GET foo.js - Domain: foo.com > > In this example, is it possible to have my varnish cache those "remote" requests as well? I can set up backends > for those remote domains and force varnish to use them instead of my own backend but I can't seem to find a > way to have varnish do this "dynamically". 
The requests doesnt seem to go through my varnish according to > varnishlog and this makes it hard to set backend depending on host. > > Am I trying to accomplish something impossible here? > > Thanks! > > Regards, > Ronny > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From lampe at hauke-lampe.de Wed Mar 30 18:27:00 2011 From: lampe at hauke-lampe.de (Hauke Lampe) Date: Wed, 30 Mar 2011 18:27:00 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D92EF0C.40204@subito.it> References: <4D92EF0C.40204@subito.it> Message-ID: <4D9359D4.6080102@hauke-lampe.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 30.03.2011 10:51, Diego Roccia wrote: > Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, > running varnish 2.1.6 (Updated from 2.0.5 1 month ago). varnish 2.1.6 hasn't been released, yet, AFAIK. > It's happening often that they freeze serving contents. and I noticed a > common pattern: the content that get stuck is always one of the most > served, like a css or js file, or some component of the page layout, Do you run 2.1.4 or 2.1.5? Is the "freeze" a constant timeout, i.e. does it eventually deliver the content after the same period of time? There was a bug in 2.1.4 that could lead to the symptoms you describe. If the client sent an If-Modified-Since: header and the backend returned a 304 response, varnish would wait on the backend connection until "first_byte_timeout" elapsed. In that case, the following VCL code helps: sub vcl_pass { unset bereq.http.if-modified-since; unset bereq.http.if-none-match; } http://cfg.openchaos.org/varnish/vcl/common/bug_workaround_2.1.4_304.vcl See also this thread: http://www.gossamer-threads.com/lists/varnish/misc/17155#17155 Hauke. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) iEYEARECAAYFAk2TWc8ACgkQKIgAG9lfHFOU1wCgkr0TwZZoJQz7CQ5vdCgryENP 4HIAn0W0qG2K63vnkHDNA1ZMRGElIE30 =BTfX -----END PGP SIGNATURE----- From diego.roccia at subito.it Thu Mar 31 10:33:02 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Thu, 31 Mar 2011 10:33:02 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D9359D4.6080102@hauke-lampe.de> References: <4D92EF0C.40204@subito.it> <4D9359D4.6080102@hauke-lampe.de> Message-ID: <4D943C3E.5070903@subito.it> On 03/30/2011 06:27 PM, Hauke Lampe wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > On 30.03.2011 10:51, Diego Roccia wrote: > >> Varnish servers are HP DL360 G6 with 66Gb RAM and 4 Quad-Core Xeon CPUs, >> running varnish 2.1.6 (Updated from 2.0.5 1 month ago). > > varnish 2.1.6 hasn't been released, yet, AFAIK. Sorry, I meant varnish-2.1.5 (SVN 0843d7a), the version from official rpm repository > >> It's happening often that they freeze serving contents. and I noticed a >> common pattern: the content that get stuck is always one of the most >> served, like a css or js file, or some component of the page layout, > > Do you run 2.1.4 or 2.1.5? Is the "freeze" a constant timeout, i.e. does > it eventually deliver the content after the same period of time? Doesn't seems to be a constant time. the same varnish provides tens of elements per page, and sometimes it gets stuck on one of them. It's always a css or js. There are no rules in the vcl specific to these kind of files. 
so the only common pattern I see is that they're the files it has to serve most times. > There was a bug in 2.1.4 that could lead to the symptoms you describe. > If the client sent an If-Modified-Since: header and the backend returned > a 304 response, varnish would wait on the backend connection until > "first_byte_timeout" elapsed. > I don't think it's receiving the If-Modified-Since , as we're talking about website monitoring tools, and they are configured to start cache cleared every time. > In that case, the following VCL code helps: > > sub vcl_pass { > unset bereq.http.if-modified-since; > unset bereq.http.if-none-match; > } > http://cfg.openchaos.org/varnish/vcl/common/bug_workaround_2.1.4_304.vcl > > See also this thread: > http://www.gossamer-threads.com/lists/varnish/misc/17155#17155 > > > > Hauke. > > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > > iEYEARECAAYFAk2TWc8ACgkQKIgAG9lfHFOU1wCgkr0TwZZoJQz7CQ5vdCgryENP > 4HIAn0W0qG2K63vnkHDNA1ZMRGElIE30 > =BTfX > -----END PGP SIGNATURE----- From mhettwer at team.mobile.de Thu Mar 31 11:15:06 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 10:15:06 +0100 Subject: Varnish stuck on most served content In-Reply-To: <4D92F38B.6090900@subito.it> Message-ID: Hi Diego, Please try to avoid top posting. On 30.03.11 11:10, "Diego Roccia" wrote: >Hi Traian, > Thanks for your interest. The problem is that it's a random issue. I >noticed it as I'm using some commercial tools (keynote and gomez) to >monitor website performances and I notice some out of average point in >the scatter time graph. Experiencing it locally is really hard. We are using gomez to let them monitor some of our important pages from remote. What we usually do if we see spikes is, to dig in and find out were the time is spent. In your example, if it's gomez, click in and check. Is it first byte time? DNS time? Content delivery time? With regards to how to debug that. I second the question to the list. My usual procedure in a setup of Apache-->Tomcat-->SomeBackends, I'll go and dig into the access logs of all components and try to figure out who is spending the time to deliver. However, with varnish in front of apaches, I usually don't have a logfile which tells me "varnish thinks it spend xx ms to deliver this request". I know that theres varnishncsa, but I dunno whether it logs away the processing time of a request (%D in Apache LogFormat IIRC). You might try and really use varnishlog to log away requests to js and css files. However, you might grow some really huge files there... Hard to parse them ;) >> >> I'm running Centos 5.5 64bit and here's my varnish startup parameters: >> >> DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >> -f ${VARNISH_VCL_CONF} \ >> -T 0.0.0.0:6082 \ >> -t 604800 \ >> -u varnish -g varnish \ >> -s malloc,54G \ >> -p thread_pool_add_delay=2 \ >> -p thread_pools=16 \ >> -p thread_pool_min=50 \ >> -p thread_pool_max=4000 \ >> -p listen_depth=4096 \ >> -p lru_interval=600 \ >> -hclassic,500009 \ >> -p log_hashstring=off \ >> -p shm_workspace=16384 \ >> -p ping_interval=2 \ >> -p default_grace=3600 \ >> -p pipe_timeout=10 \ >> -p sess_timeout=6 \ >> -p send_timeout=10" Hu. What are all those "-p" parameters? Looks like some heavy tweaking to me. Perhaps some varnish gurus might shime in, but to me tuning like that sounds like trouble. Unless you really know what you did there. I wouldn't (not without the documentation at hands). 
Cheers, Marian From geoff at uplex.de Thu Mar 31 11:40:24 2011 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 31 Mar 2011 11:40:24 +0200 Subject: Varnish stuck on most served content In-Reply-To: References: Message-ID: <4D944C08.7040804@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/31/11 11:15 AM, Hettwer, Marian wrote: >>> >>> I'm running Centos 5.5 64bit and here's my varnish startup parameters: >>> >>> DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >>> -f ${VARNISH_VCL_CONF} \ >>> -T 0.0.0.0:6082 \ >>> -t 604800 \ >>> -u varnish -g varnish \ >>> -s malloc,54G \ >>> -p thread_pool_add_delay=2 \ >>> -p thread_pools=16 \ >>> -p thread_pool_min=50 \ >>> -p thread_pool_max=4000 \ >>> -p listen_depth=4096 \ >>> -p lru_interval=600 \ >>> -hclassic,500009 \ >>> -p log_hashstring=off \ >>> -p shm_workspace=16384 \ >>> -p ping_interval=2 \ >>> -p default_grace=3600 \ >>> -p pipe_timeout=10 \ >>> -p sess_timeout=6 \ >>> -p send_timeout=10" > > Hu. What are all those "-p" parameters? Looks like some heavy tweaking to > me. > Perhaps some varnish gurus might shime in, but to me tuning like that > sounds like trouble. > Unless you really know what you did there. > > I wouldn't (not without the documentation at hands). Um. Many of those -p's are roughly in the ranges recommended on the Wiki performance page, and on Kristian Lyngstol's blog. http://www.varnish-cache.org/trac/wiki/Performance http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/ Perhaps one of the settings is causing a problem, but it isn't wrong to be doing it all -- and it's quite necessary on a high-traffic site. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNlEwIAAoJEOUwvh9pJNUR0tYP/2M9LpET5mj3OQiMu2Bym1JD iTn2eckasyQRwPzXvDhCZNFRHJDV8aO2wUZ3XqMFsty05FKgPUhoLgJZ9wAoaBXZ oVVr34G4b33SFlVAxfvskHrEp83F0cY5Gb6W/JP2Oj/SzpHEM3elT+8tTFXjgngB F463EiGcikdSdQ5PaMGfTva9JZP6QI+K0IYW4walCPSsz829yQ6I6e5eIDCECiFq BhJMcXdvATWOHg5LfcRUOlcQJJFPl0mzT/Y2zq/hgdImjZ5NLU87xhjFD8twKOVZ Rju8u2Cz6Pl9HHNyVTV5W2fNmIE3J1o972JseHz4wFNoEJQtzTtyEGADE2u2bXH9 Blbor4J1bmERUSyFvH9Brhe1+4Rs5IOtGFCGrEzpxtY+QiqCIkdxmCCl5/EhQlRl eJZMkN3eaXvGgrHHASxM7e2UoIFm0XrBJW5N01Bu6dA/EH6jLowwEmU6OeLkKUSF DLIgAeKt1ECrVU23b9zFfiZSQwMTKB7KumrJoeDrUtSuWVIWdz83thaD0MI6ucxD 62CIPkR7W5zDxSDQ0A/AnXrkZpe8sLP9Z/DgcHA8rSX39zqxJae44OnU56fU07zc 440P+GeT6j5MoKAa1gCxSDAVr7MnDB3B82Y8fZaUFWB1rT1tI/B7VB5dhVwFoEi2 ucD3QwucEs7bpLrKyiwo =3vMe -----END PGP SIGNATURE----- From dan at retrobadger.net Thu Mar 31 13:39:51 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 12:39:51 +0100 Subject: Learning C and VRT_SetHdr() Message-ID: <4D946807.8010900@retrobadger.net> I would like to do some more advanced functionality within our VCL file, and to do so need to use the inline-c capabilities of varnish. So, to start off with, I thought I would get my VCL file to set the headers, so I can test variables, and be sure it is working. But am getting a WSOD when I impliment my seemingly simple code. 
So my main questions are:
* Are there any good docs on the VRT variables
* Are there examples/tutorials on using C within the VCL (for beginners to the subject)
* Is there something obvious I have overlooked when writing my code

The snippet from my current code is:

sub detectmobile {
  C{
    VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1", vrt_magic_string_end);
  }C
}
sub vcl_miss {
  call detectmobile;
  return(fetch);
}
sub vcl_pipe {
  call detectmobile;
  return(pipe);
}
sub vcl_pass {
  call detectmobile;
  return(pass);
}

Thanks for your advice,
Dan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From straightflush at gmail.com  Thu Mar 31 13:58:26 2011
From: straightflush at gmail.com (AD)
Date: Thu, 31 Mar 2011 07:58:26 -0400
Subject: Learning C and VRT_SetHdr()
In-Reply-To: <4D946807.8010900@retrobadger.net>
References: <4D946807.8010900@retrobadger.net>
Message-ID: 

use -C to display the default VCL, or just put in a command you want to do in C inside the vcl and then see how it looks when running -C -f against the config.

On Thu, Mar 31, 2011 at 7:39 AM, Dan wrote:

> I would like to do some more advanced functionality within our VCL file,
> and to do so need to use the inline-c capabilities of varnish. So, to start
> off with, I thought I would get my VCL file to set the headers, so I can
> test variables, and be sure it is working. But am getting a WSOD when I
> impliment my seemingly simple code.
>
> So my main questions are:
> * Are there any good docs on the VRT variables
> * Are there examples/tutorials on using C within the VCL (for beginners to
> the subject)
> * Is there something obvious I have overlooked when writing my code
>
> The snippet from my current code is:
> sub detectmobile {
>   C{
>     VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1",
> vrt_magic_string_end);
>   }C
> }
> sub vcl_miss {
>   call detectmobile;
>   return(fetch);
> }
> sub vcl_pipe {
>   call detectmobile;
>   return(pipe);
> }
> sub vcl_pass {
>   call detectmobile;
>   return(pass);
> }
>
> Thanks for your advice,
> Dan
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cosimo at streppone.it  Thu Mar 31 13:58:54 2011
From: cosimo at streppone.it (Cosimo Streppone)
Date: Thu, 31 Mar 2011 22:58:54 +1100
Subject: Learning C and VRT_SetHdr()
In-Reply-To: <4D946807.8010900@retrobadger.net>
References: <4D946807.8010900@retrobadger.net>
Message-ID: 

On Thu, 31 Mar 2011 22:39:51 +1100, Dan wrote:

> The snippet from my current code is:

Is it correct to do this in all vcl_miss, pipe and pass?
What about vcl_hit then?

I would have expected this to happen in vcl_deliver() or vcl_fetch() if you want your backends to see the header you're setting.

Anyway...

> sub detectmobile {
>   C{
>     VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1",
> vrt_magic_string_end);
>   }C

I believe you have a problem in the "\020" bit. That is octal notation.

"X-Whatever:" is 11 bytes, including the ':', so you need to write:

"\013X-Whatever:"

Have fun,

-- 
Cosimo

From ttischler at homeaway.com  Wed Mar 30 20:46:43 2011
From: ttischler at homeaway.com (Tim Tischler)
Date: Wed, 30 Mar 2011 13:46:43 -0500
Subject: varnish as traffic director
In-Reply-To: 
Message-ID: 

We first started using varnish for caching during a Super Bowl advertisement, and then when we no longer needed the caching, we kept using it as our load balancer. We're now using it as an A/B testing system between static builds with a number of different rules. We've written a ruby DSL to generate the common rules and inject the GUID hashes that uniquely identify the A vs. the B builds. We are also routing path prefixes to various additional applications.

I've been extremely happy with the speed, the stability, and the flexibility of varnish as a load balancer/content switch, even without caching.

-T

On 3/30/11 2:23 AM, "Per Buer" wrote:

> On Wed, Mar 30, 2011 at 5:01 AM, Jonathan Leibiusky wrote:
>> Thanks!
>> If I have 100 of different rules, I would have a very big if block, right?
>> Is this a common use case for varnish?
>
> Yes. It's quite common to have a lot of logic. Don't worry about it,
> VCL is executed at light speed.

From mhettwer at team.mobile.de  Thu Mar 31 14:48:42 2011
From: mhettwer at team.mobile.de (Hettwer, Marian)
Date: Thu, 31 Mar 2011 13:48:42 +0100
Subject: varnish as traffic director
In-Reply-To: 
Message-ID: 

Hej there,

On 30.03.11 04:52, "AD" wrote:

>sub vcl_recv {
>  if (req.url ~ "^/search") {
>    set req.backend = search-backend;
>  }
>  elseif (req.url ~ "^/items") {
>    set req.backend = items-backend;
>  }
>}

By the way, would it also be okay to write it like that?

sub vcl_recv {
  set req.backend = catchall-backend;

  if (req.url ~ "^/search") {
    set req.backend = search-backend;
  }
  if (req.url ~ "^/items") {
    set req.backend = items-backend;
  }
}

Obviously with the addition of the catchall-backend.

Cheers,
Marian

From rtshilston at gmail.com  Thu Mar 31 14:53:49 2011
From: rtshilston at gmail.com (Robert Shilston)
Date: Thu, 31 Mar 2011 13:53:49 +0100
Subject: varnish as traffic director
In-Reply-To: 
References: 
Message-ID: 

> On 30.03.11 04:52, "AD" wrote:
>
>> sub vcl_recv {
>>   if (req.url ~ "^/search") {
>>     set req.backend = search-backend;
>>   }
>>   elseif (req.url ~ "^/items") {
>>     set req.backend = items-backend;
>>   }
>> }
>
> By the way, would it also be okay to write it like that?
>
> sub vcl_recv {
>   set req.backend = catchall-backend;
>
>   if (req.url ~ "^/search") {
>     set req.backend = search-backend;
>   }
>   if (req.url ~ "^/items") {
>     set req.backend = items-backend;
>   }
> }

Logically it's ok. But, it's probably slightly better in terms of efficiency to use an elseif pattern. That is, you'll do the first pattern match (/search) on every request, and then you'll also do the pattern match for items, even if you'd already matched to /search. Two pattern matches rather than one is undesirable, and even more so if you ended up having lots and lots of matches to do.

Rob

From ionathan at gmail.com  Thu Mar 31 14:59:33 2011
From: ionathan at gmail.com (Jonathan Leibiusky)
Date: Thu, 31 Mar 2011 09:59:33 -0300
Subject: varnish as traffic director
In-Reply-To: 
References: 
Message-ID: 

Thanks! It is great to know about real life implementations.
Do you have any good way to test rules in your dev env?
Is there any benchmark on varnish vs. nginx in regard of load balancing?
On 3/31/11, Hettwer, Marian wrote: > Hej there, > > > > > On 30.03.11 04:52, "AD" wrote: > >>sub vcl_recv { >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> elseif (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >>} > > By the way, would it also be okay to write it like that? > > sub vcl_recv { > > set req.backend = catchall-backend; > > > if (req.url ~ "^/search") { > set req.backend = search-backend; > } > if (req.url ~ "^/items") { > set req.backend = items-backend; > } > > } > > > Obviously with the addition of the catchall-backend. > > Cheers, > Marian > > From mhettwer at team.mobile.de Thu Mar 31 15:06:09 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 14:06:09 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: On 31.03.11 14:53, "Robert Shilston" wrote: >> >> On 30.03.11 04:52, "AD" wrote: >> >>> sub vcl_recv { >>> if (req.url ~ "^/search") { >>> set req.backend = search-backend; >>> } >>> elseif (req.url ~ "^/items") { >>> set req.backend = items-backend; >>> } >>> >>> } >> >> By the way, would it also be okay to write it like that? >> >> sub vcl_recv { >> >> set req.backend = catchall-backend; >> >> >> if (req.url ~ "^/search") { >> set req.backend = search-backend; >> } >> if (req.url ~ "^/items") { >> set req.backend = items-backend; >> } >> >> } > > >Logically it's ok. But, it's probably slightly better in terms of >efficiency to use an elseif pattern. This is you'll do the first pattern >match (/search) on every request, and then you'll also do the pattern >match for items, even if you'd already matched to /search. Two pattern >matches rather than one is undesirable, and even more so if you ended up >having lots and lots of matches to do. Understood! Thanks for your explanation :-) Cheers, Marian From mhettwer at team.mobile.de Thu Mar 31 15:09:22 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 31 Mar 2011 14:09:22 +0100 Subject: varnish as traffic director In-Reply-To: Message-ID: On 31.03.11 14:59, "Jonathan Leibiusky" wrote: >Thanks! It is great to know about real life implementations. >Do you have any good way to test rules in your dev env? >Is there any benchmark on varnish vs. nginx in regard of load balancing? Some Real-Life numbers. We have 4 varnishes deployed in front of a big german classifieds site. Each varnish is doing 1200 requests/second and according to munin, each machine is nearly idle. (cpu load at 4% out of 800%). Hardware is HP blades with 8 cores and 8 gig ram. I wouldn't bother to try out nginx. If nginx is in the same league like varnish, I probably couldn't spot a difference anyway ;-) Besides, I'm really happy with varnish. Sorry, no real-life infos about nginx here... Regards, Marian From dan at retrobadger.net Thu Mar 31 15:25:13 2011 From: dan at retrobadger.net (Dan) Date: Thu, 31 Mar 2011 14:25:13 +0100 Subject: Learning C and VRT_SetHdr() In-Reply-To: References: <4D946807.8010900@retrobadger.net> Message-ID: <4D9480B9.4060101@retrobadger.net> On 31/03/11 12:58, Cosimo Streppone wrote: > On Thu, 31 Mar 2011 22:39:51 +1100, Dan wrote: > >> The snipper from my current code is: > > Is it correct to do this in all > vcl_miss, pipe and pass? > What about vcl_hit then? > > I would have expected this to happen in vcl_deliver() > or vcl_fetch() if you want your backends to see > the header you're setting. > > Anyway... 
>
>> sub detectmobile {
>>   C{
>>     VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1",
>> vrt_magic_string_end);
>>   }C
>
> I believe you have a problem in the "\020" bit.
> That is octal notation.
>
> "X-Whatever:" is 11 bytes, including the ':',
> so you need to write:
>
> "\013X-Whatever:"
>
> Have fun,
>

Sadly no luck with that, I have amended my code as recommended. Varnish is still able to restart without errors, but WSOD on page load. My custom function is now something:

sub detectmobile {
  C{
    VRT_SetHdr(sp, HDR_BEREQ, "\013X-Varnish-TeraWurfl:", "no1", vrt_magic_string_end);
  }C
}

And the only occurrence of 'call detectmobile;' is in:
sub vcl_deliver {}

Are there any libraries required for the VRT scripts to work?

Do I need to alter the /etc/varnish/default file for C to work in varnish?

From dan at retrobadger.net  Thu Mar 31 15:28:21 2011
From: dan at retrobadger.net (Dan)
Date: Thu, 31 Mar 2011 14:28:21 +0100
Subject: Learning C and VRT_SetHdr()
In-Reply-To: 
References: <4D946807.8010900@retrobadger.net>
Message-ID: <4D948175.3000503@retrobadger.net>

On 31/03/11 12:58, AD wrote:
> use -C to display the default VCL, or just put in a command you want
> to do in C inside the vcl and then see how it looks when running -C -f
> against the config.
>
>
> On Thu, Mar 31, 2011 at 7:39 AM, Dan wrote:
>
> I would like to do some more advanced functionality within our VCL
> file, and to do so need to use the inline-c capabilities of
> varnish. So, to start off with, I thought I would get my VCL file
> to set the headers, so I can test variables, and be sure it is
> working. But am getting a WSOD when I impliment my seemingly
> simple code.
>
>
> So my main questions are:
> * Are there any good docs on the VRT variables
> * Are there examples/tutorials on using C within the VCL (for
> beginners to the subject)
> * Is there something obvious I have overlooked when writing my code
>
>
> The snippet from my current code is:
> sub detectmobile {
>   C{
>     VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1",
> vrt_magic_string_end);
>   }C
> }
> sub vcl_miss {
>   call detectmobile;
>   return(fetch);
> }
> sub vcl_pipe {
>   call detectmobile;
>   return(pipe);
> }
> sub vcl_pass {
>   call detectmobile;
>   return(pass);
> }
>
>
> Thanks for your advice,
> Dan
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

Sorry, I am confused, where would I put -C, in my /etc/default/varnish file? Is this required to use inline-c within my vcl file?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kbrownfield at google.com  Thu Mar 31 20:09:24 2011
From: kbrownfield at google.com (Ken Brownfield)
Date: Thu, 31 Mar 2011 11:09:24 -0700
Subject: Learning C and VRT_SetHdr()
In-Reply-To: <4D9480B9.4060101@retrobadger.net>
References: <4D946807.8010900@retrobadger.net>
	<4D9480B9.4060101@retrobadger.net>
Message-ID: 

On Thu, Mar 31, 2011 at 06:25, Dan wrote:

> Sadly no luck with that, I have amended my code as recommended. Varnish
> is still able to restart without errors, but WSOD on page load. My custom
> function is now something:
>

The length of your header is 20 characters including the colon. 013 is the length (in octal) of the X-Whatever: example provided to explain this to you; it is not octal for 20. Replace 013 with 024 to avoid segfaults.
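
Something like this should be closer (an untested sketch: just your original snippet with the length prefix corrected, since "X-Varnish-TeraWurfl:" is 20 characters and 20 decimal is 024 octal):

sub detectmobile {
  C{
    /* length prefix is octal: 20 chars incl. the trailing colon -> \024 */
    VRT_SetHdr(sp, HDR_BEREQ, "\024X-Varnish-TeraWurfl:", "no1",
               vrt_magic_string_end);
  }C
}

Also, HDR_BEREQ is only meaningful where a backend request is actually being set up (vcl_miss/vcl_pass/vcl_pipe, as in your first mail); calling it from vcl_deliver is asking for trouble.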
There are docs covering this on the website, BTW.

What on earth is a WSOD?
-- 
kb

> sub detectmobile {
>     C{
>         VRT_SetHdr(sp, HDR_BEREQ, "\013X-Varnish-TeraWurfl:", "no1",
> vrt_magic_string_end);
>     }C
> }
>
> And the only occurrence of 'call detectmobile;' is in:
> sub vcl_deliver {}
>
> Are there any libraries required for the VRT scripts to work?
>
> Do I need to alter the /etc/varnish/default file for C to work in varnish?
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thebog at gmail.com  Thu Mar 31 21:46:33 2011
From: thebog at gmail.com (thebog)
Date: Thu, 31 Mar 2011 21:46:33 +0200
Subject: Learning C and VRT_SetHdr()
In-Reply-To: <4D948175.3000503@retrobadger.net>
References: <4D946807.8010900@retrobadger.net>
	<4D948175.3000503@retrobadger.net>
Message-ID: 

I think he meant -C on the command line, not inside the file (example at the bottom of this mail).

YS
Anders Berg


On Thu, Mar 31, 2011 at 3:28 PM, Dan wrote:
> On 31/03/11 12:58, AD wrote:
>
> use -C to display the default VCL, or just put in a command you want to do
> in C inside the vcl and then see how it looks when running -C -f against
> the config.
>
> On Thu, Mar 31, 2011 at 7:39 AM, Dan wrote:
>>
>> I would like to do some more advanced functionality within our VCL file,
>> and to do so need to use the inline-c capabilities of varnish. So, to start
>> off with, I thought I would get my VCL file to set the headers, so I can
>> test variables, and be sure it is working. But am getting a WSOD when I
>> impliment my seemingly simple code.
>>
>> So my main questions are:
>> * Are there any good docs on the VRT variables
>> * Are there examples/tutorials on using C within the VCL (for beginners to
>> the subject)
>> * Is there something obvious I have overlooked when writing my code
>>
>> The snippet from my current code is:
>> sub detectmobile {
>>   C{
>>     VRT_SetHdr(sp, HDR_BEREQ, "\020X-Varnish-TeraWurfl:", "no1",
>> vrt_magic_string_end);
>>   }C
>> }
>> sub vcl_miss {
>>   call detectmobile;
>>   return(fetch);
>> }
>> sub vcl_pipe {
>>   call detectmobile;
>>   return(pipe);
>> }
>> sub vcl_pass {
>>   call detectmobile;
>>   return(pass);
>> }
>>
>> Thanks for your advice,
>> Dan
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> Sorry, I am confused, where would I put -C, in my /etc/default/varnish
> file? Is this required to use inline-c within my vcl file?
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
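
For the record, it is the varnishd binary that takes -C; something along these lines (the path is just an example, adjust it to wherever your VCL lives) prints the generated C source, inline C included, and then exits:

  varnishd -C -f /etc/varnish/default.vcl

That output is a handy way to see what your C{ ... }C blocks turn into before loading the VCL for real.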