From math0ne at gmail.com Wed Oct 8 20:49:30 2014
From: math0ne at gmail.com (Aidan McQuay)
Date: Wed, 8 Oct 2014 13:49:30 -0500
Subject: Varnish Consultant
Message-ID:

Hey there,

We're looking for a consultant who can work with us on an hourly basis in the next few days on a sticky issue with our configuration. If you feel up to the task, you can reach me at math0ne at gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kelvin1111111 at gmail.com Thu Oct 9 16:58:16 2014
From: kelvin1111111 at gmail.com (Kelvin Loke)
Date: Thu, 9 Oct 2014 22:58:16 +0800
Subject: [Varnish 4] Custom Error Page from Backend
Message-ID:

I used to be able to do this in Varnish 3, but it's no longer an easy task in Varnish 4 (or my weary mind stops me from figuring it out).

How can I customize the error pages (403, 404, 500, etc.) from the backend before sending them to end users? I prefer not to let the public view my application's error content.

In Varnish 3 I was able to return error() and specify the customized error message in vcl_error.

From perbu at varnish-software.com Thu Oct 9 17:45:08 2014
From: perbu at varnish-software.com (Per Buer)
Date: Thu, 9 Oct 2014 17:45:08 +0200
Subject: [Varnish 4] Custom Error Page from Backend
In-Reply-To:
References:
Message-ID:

On Thu, Oct 9, 2014 at 4:58 PM, Kelvin Loke wrote:

> I used to be able to do this in Varnish 3, but it's no longer an easy task
> in Varnish 4 (or my weary mind stops me from figuring it out).
>
> How can I customize the error pages (403, 404, 500, etc.) from the backend
> before sending them to end users? I prefer not to let the public view my
> application's error content.
>
> In Varnish 3 I was able to return error() and specify the customized
> error message in vcl_error.
> However in Varnish 4 I can't find any way to
> return synth() in vcl_backend_response and vcl_backend_error and
> specify the custom message in vcl_synth().

Have a look at "builtin.vcl" in the distribution. You'll find:

# sub vcl_backend_error {
#     set beresp.http.Content-Type = "text/html; charset=utf-8";
#     set beresp.http.Retry-After = "5";
#     synthetic( {"
#
#

You should be able to override it.

-- 
*Per Buer*
CTO | Varnish Software AS
Cell: +47 95839117
We Make Websites Fly! www.varnish-software.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kelvin1111111 at gmail.com Fri Oct 10 06:27:30 2014
From: kelvin1111111 at gmail.com (Kelvin Loke)
Date: Fri, 10 Oct 2014 12:27:30 +0800
Subject: [Varnish 4] Custom Error Page from Backend
In-Reply-To:
References:
Message-ID:

It doesn't seem to work. Take the example of an HTTP 404 from the backend: it actually goes to vcl_backend_response, not vcl_backend_error. And unfortunately I am not allowed to use synthetic() in vcl_backend_response.

Inside vcl_backend_response I also thought of passing a custom header beresp.http.X-CustomError = "YES" through to vcl_deliver, but I hit the same restriction: I am not allowed to use synthetic() in vcl_deliver either.

On Thu, Oct 9, 2014 at 11:45 PM, Per Buer wrote:
>
> On Thu, Oct 9, 2014 at 4:58 PM, Kelvin Loke wrote:
>
>> I used to be able to do this in Varnish 3, but it's no longer an easy task
>> in Varnish 4 (or my weary mind stops me from figuring it out).
>>
>> How can I customize the error pages (403, 404, 500, etc.) from the backend
>> before sending them to end users? I prefer not to let the public view my
>> application's error content.
>>
>> In Varnish 3 I was able to return error() and specify the customized
>> error message in vcl_error. However in Varnish 4 I can't find any way to
>> return synth() in vcl_backend_response and vcl_backend_error and
>> specify the custom message in vcl_synth().
>
> Have a look at "builtin.vcl" in the distribution. You'll find:
>
> # sub vcl_backend_error {
> #     set beresp.http.Content-Type = "text/html; charset=utf-8";
> #     set beresp.http.Retry-After = "5";
> #     synthetic( {"
> #
> #
>
> You should be able to override it.
>
> -- 
> *Per Buer*
> CTO | Varnish Software AS
> Cell: +47 95839117
> We Make Websites Fly! www.varnish-software.com
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:

From apj at mutt.dk Fri Oct 10 09:28:38 2014
From: apj at mutt.dk (Andreas Plesner Jacobsen)
Date: Fri, 10 Oct 2014 09:28:38 +0200
Subject: [Varnish 4] Custom Error Page from Backend
In-Reply-To:
References:
Message-ID: <20141010072838.GF19870@nerd.dk>

On Fri, Oct 10, 2014 at 12:27:30PM +0800, Kelvin Loke wrote:
>
> And unfortunately I am not allowed to use synthetic() in
> vcl_backend_response. Inside vcl_backend_response I also thought of passing
> a custom header beresp.http.X-CustomError = "YES" to vcl_deliver, but it's
> the same result that I am not allowed to use synthetic() in vcl_deliver.

You can return synth in deliver and go to vcl_synth.

-- 
Andreas

From kelvin1111111 at gmail.com Fri Oct 10 12:56:24 2014
From: kelvin1111111 at gmail.com (Kelvin Loke)
Date: Fri, 10 Oct 2014 18:56:24 +0800
Subject: [Varnish 4] Custom Error Page from Backend
In-Reply-To: <20141010072838.GF19870@nerd.dk>
References: <20141010072838.GF19870@nerd.dk>
Message-ID:

> You can return synth in deliver and go to vcl_synth.

Unfortunately vcl_deliver also doesn't allow me to return synth():
https://www.varnish-cache.org/docs/trunk/users-guide/vcl-built-in-subs.html

:(

-------------- next part --------------
An HTML attachment was scrubbed...
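For reference, on Varnish 4.0.2 and later the missing transition does exist, and the whole pattern can be sketched like this. This is an untested sketch: the status codes and the HTML body are placeholders, not part of the original thread.

```vcl
vcl 4.0;

sub vcl_deliver {
    # Replace backend error bodies with our own synthetic page.
    # return (synth) from vcl_deliver requires Varnish 4.0.2 or newer.
    if (resp.status == 403 || resp.status == 404 || resp.status >= 500) {
        return (synth(resp.status, resp.reason));
    }
}

sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    synthetic( {"<!DOCTYPE html>
<html><body><h1>Sorry, something went wrong.</h1></body></html>
"} );
    return (deliver);
}
```

Backend *fetch* failures (the backend never answered) still go through vcl_backend_error, where the builtin.vcl version can be overridden with the same synthetic() mechanism.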
URL:

From kokoniimasu at gmail.com Fri Oct 10 14:26:38 2014
From: kokoniimasu at gmail.com (kokoniimasu)
Date: Fri, 10 Oct 2014 21:26:38 +0900
Subject: [Varnish 4] Custom Error Page from Backend
In-Reply-To:
References: <20141010072838.GF19870@nerd.dk>
Message-ID:

This is supported as of 4.0.2:

> It is now possible to return(synth) from vcl_deliver.

https://www.varnish-cache.org/trac/browser/doc/changes.rst?rev=bfe7cd

;)

-- 
Shohei Tanaka(@xcir)
http://blog.xcir.net/ (:3[__])(:3[__])(:3[__])

2014-10-10 19:56 GMT+09:00 Kelvin Loke :
>> You can return synth in deliver and go to vcl_synth.
>
> Unfortunately vcl_deliver also doesn't allow me to return synth()
>
> https://www.varnish-cache.org/docs/trunk/users-guide/vcl-built-in-subs.html
>
> :(
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From leandro at jovenclub.cu Fri Oct 10 20:45:44 2014
From: leandro at jovenclub.cu (Leandro de la Paz Cabrera)
Date: Fri, 10 Oct 2014 14:45:44 -0400
Subject: odd behavior with ban.list
Message-ID: <54382958.8080209@jovenclub.cu>

Hi,

I'm seeing odd behavior with the ban list. When I do a manual request to Varnish using 'curl -X PURGE http://example.com/', the ban shows up in ban.list the first time, but if I repeat the request the list of banned items is completely cleared. Any clue?

regards

________________________________________________________________
XII Edicion del Evento Nacional de Informatica para Jovenes.
INFOCLUB. Abril. 2015. Ver www.jovenclub.cu
________________________________________________________________

From leandro at jovenclub.cu Sat Oct 11 18:32:05 2014
From: leandro at jovenclub.cu (Leandro de la Paz Cabrera)
Date: Sat, 11 Oct 2014 12:32:05 -0400
Subject: odd behavior with ban.list
In-Reply-To: <54382958.8080209@jovenclub.cu>
References: <54382958.8080209@jovenclub.cu>
Message-ID: <54395B85.2060807@jovenclub.cu>

My bad.
It seems that, because the site is not live yet and the Varnish cache therefore has no content, the bans expire too quickly.

regards

________________________________________________________________
XII Edicion del Evento Nacional de Informatica para Jovenes.
INFOCLUB. Abril. 2015. Ver www.jovenclub.cu
________________________________________________________________

From jay at forecast.io Wed Oct 15 15:30:12 2014
From: jay at forecast.io (Jay LaPorte)
Date: Wed, 15 Oct 2014 09:30:12 -0400
Subject: Are parallel ESI fetches on the roadmap?
Message-ID:

Hello, Varnish-Misc!

(Apologies if this is the wrong mailing list for this question, or if it is answered elsewhere; neither Google nor the wiki turned up anything when I searched.)

It was with some surprise that I discovered today (through benchmarking with `varnishlog`) that Varnish does not support fetching ESI requests in parallel. I am in the process of developing a JSON API that makes heavy use of ESI (a response may commonly contain hundreds of ESI tags for its constituent parts), and if ESI requests are done serially, uncached (or even partially uncached) responses take an eternity. (Our use case is such that the individual parts of a request have a high cache hit rate, but the requests themselves do not.)

From looking around the web, it seems that Varnish 4.0 did the preliminary groundwork to make parallel ESI fetches possible, and that they are a planned feature:

* https://www.varnish-cache.org/trac/wiki/Future_ESI
* https://www.varnish-software.com/blog/varnish-40-qa-performance-vmods-ssl-ims-swr-and-more
* https://twitter.com/ruben_varnish/status/522323253076172800

I was wondering if there is any rough roadmap or ETA for implementing it already in place, and if so, where I can track its progress? If there is not, who should I contact regarding it?
(Allocating funds to Varnish in order to help support this feature is not out of the question, depending on the specifics, and furthermore I'd be very happy to help test it if need be.)

Thanks for any information you can provide!

Jay LaPorte

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dhose0 at gmail.com Wed Oct 15 18:32:59 2014
From: dhose0 at gmail.com (Daniel)
Date: Wed, 15 Oct 2014 18:32:59 +0200
Subject: beresp.do_stream and memory usage
Message-ID: <543EA1BB.8080208@gmail.com>

Hi,

I have a question about beresp.do_stream and memory usage in Varnish 4.0.2.

Example: the response (JSON) from the backend is 73 kB in size uncompressed.

Looking at SMA.s0.g_bytes after one request, I can see that Varnish allocated 132 kB of memory for this request. When compression is enabled, Varnish still allocates 132 kB of memory. I have also tested responses of different sizes, and as long as the response is smaller than 132 kB Varnish seems to always allocate 132 kB of memory, even if the response is only 2 kB in size.

If I then add beresp.do_stream = false in vcl_backend_response, I can see that SMA.s0.g_bytes after one request is 74 kB, and 22 kB with compression enabled. The backend is responding with "Transfer-Encoding: chunked", if that matters. I would like some help understanding why I am seeing this behaviour. Is it expected behaviour?

Let me know if you need further information. Thanks!

Regards,
Daniel

From dhose0 at gmail.com Thu Oct 16 09:54:13 2014
From: dhose0 at gmail.com (Daniel)
Date: Thu, 16 Oct 2014 09:54:13 +0200
Subject: beresp.do_stream and memory usage
In-Reply-To: <543EA1BB.8080208@gmail.com>
References: <543EA1BB.8080208@gmail.com>
Message-ID: <543F79A5.7040300@gmail.com>

Hi again,

On 2014-10-15 18:32, Daniel wrote:
> I have a question about beresp.do_stream and memory usage in Varnish 4.0.2.
>
> Example:
>
> The response (JSON) from backend is 73 kB in size uncompressed.
> Looking at the SMA.s0.g_bytes after one request I can see that Varnish
> allocated 132 kB memory for this request. When compression is enabled
> Varnish still allocates 132 kB of memory. I have also tested requests
> with different sizes and as long as the request is smaller than 132 kB
> Varnish seems to always allocate 132 kB of memory, even if the request
> is only 2 kB in size.
>
> If I then add beresp.do_stream = false in vcl_backend_response I can see
> that SMA.s0.g_bytes after one request is 74 kB and 22 kB with
> compression enabled. The backend is responding with "Transfer-Encoding:
> chunked", if that matters. I would like some help to understand why I am
> seeing this behaviour. Is it expected behaviour?
>
> Let me know if you need further information.

After some more testing, I decided to add beresp.do_stream = false to our production Varnish. Our Varnish in production is configured with 9G malloc, a 3m TTL and 60m grace. With this configuration around 70 000 objects were stored in the cache, all memory was used (according to SMA.s0.g_bytes) and the lru_nuked counter started to increase.

After adding beresp.do_stream = false, the number of objects in cache has slightly increased, SMA.s0.g_bytes has dropped to 330 MB and the hit rate is the same as before. Any ideas why I see such a dramatic decrease in memory usage when adding beresp.do_stream = false?

Thanks!

Regards,
Daniel

From phk at phk.freebsd.dk Thu Oct 16 10:16:23 2014
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 16 Oct 2014 08:16:23 +0000
Subject: beresp.do_stream and memory usage
In-Reply-To: <543EA1BB.8080208@gmail.com>
References: <543EA1BB.8080208@gmail.com>
Message-ID: <97316.1413447383@critter.freebsd.dk>

--------
In message <543EA1BB.8080208 at gmail.com>, Daniel writes:

> Hi,
>
> I have a question about beresp.do_stream and memory usage in Varnish 4.0.2.
>
> Example:
>
> The response (JSON) from backend is 73 kB in size uncompressed.
> >Looking at the SMA.s0.g_bytes after one request I can see that Varnish >allocated 132 kB memory for this request. I guess your backend doesn't send a Content-Length: header, so Varnish will allocate storage in "fetch_chunksize" lumps. When streaming is enabled, we cannot (at present) trim surplus storage back, so you end up with 128K of storage by default. When you disable streaming, we can trim the storage. Depending on your average object size, you could try to reduce fetch_chunksize to something like 16K maybe. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From lkarsten at varnish-software.com Thu Oct 16 13:37:07 2014 From: lkarsten at varnish-software.com (Lasse Karstensen) Date: Thu, 16 Oct 2014 13:37:07 +0200 Subject: Are parallel ESI fetches on the roadmap? In-Reply-To: References: Message-ID: <20141016113706.GC4165@immer.varnish-software.com> On Wed, Oct 15, 2014 at 09:30:12AM -0400, Jay LaPorte wrote: > It is with some surprise that I today discovered (through benchmarking with > `varnishlog`) that Varnish does not support fetching ESI requests in > parallel. I am in the process of developing an JSON API that makes heavy > use of ESI?a response may commonly contain hundreds of ESI tags to its > constituent parts?and if ESI requests are done serially, uncached (or even > partially uncached) responses take an eternity. (Our use case is such that > the individual parts of a request have a high cache hit rate, but the > requests themselves do not.) 
> From looking around the web, it seems that Varnish 4.0 did the preliminary
> groundwork to make parallel ESI fetches possible, and that they are a
> planned feature:
>
> * https://www.varnish-cache.org/trac/wiki/Future_ESI
> * https://www.varnish-software.com/blog/varnish-40-qa-performance-vmods-ssl-ims-swr-and-more
> * https://twitter.com/ruben_varnish/status/522323253076172800
>
> I was wondering if there was any rough roadmap or ETA for implementing it
> already in place, and if so, where I can track its progress? If there is
> not, who should I contact regarding it? (Allocating funds to Varnish in
> order to help support this feature is not out of the question depending on
> the specifics, and furthermore I'd be very happy to help test it if need
> be.)

Hi.

We (as in the core dev team) have discussed parallel ESI as a potential 4.1 feature. As it stands, we haven't decided, since HTTP/2.0 needs to be catered for as well. I'd expect this to formalise a bit more at the next developer meeting in November.

Currently you can perhaps hack/work around this by setting up Varnish as a backend to itself. The "inner" Varnish can then give you a slightly stale version (enable grace) while a background fetch is kicked off to renew the stored object. This should remove most of the extra latency, given that you can in fact cache the ESI subresources.

Varnish Software (email: sales at varnish-software.com) may be able to help you out if you are willing to pay for the parallel ESI feature.

-- 
Lasse Karstensen
Varnish Software AS

From bichonfrise74 at gmail.com Thu Oct 16 19:26:42 2014
From: bichonfrise74 at gmail.com (bichonfrise74)
Date: Thu, 16 Oct 2014 10:26:42 -0700
Subject: Stats on Varnish Hash Keys Usage
Message-ID:

I'm using Varnish 3.0.2. Is there a way to see statistics on Varnish hash keys? For example, assume my vcl_hash contains only the url, the host and a specific cookie; I would then like to get info about how many times each key is being hit...
For example,

host_a+url_a+cookie = 10
host_b+url_b+cookie = 5

I'm not sure varnishlog/varnishtop would accomplish the above. Thanks in advance.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dhose0 at gmail.com Fri Oct 17 12:33:49 2014
From: dhose0 at gmail.com (Daniel)
Date: Fri, 17 Oct 2014 12:33:49 +0200
Subject: beresp.do_stream and memory usage
In-Reply-To: <97316.1413447383@critter.freebsd.dk>
References: <543EA1BB.8080208@gmail.com> <97316.1413447383@critter.freebsd.dk>
Message-ID: <5440F08D.1080507@gmail.com>

Hi,

On 2014-10-16 10:16, Poul-Henning Kamp wrote:
> I guess your backend doesn't send a Content-Length: header, so Varnish
> will allocate storage in "fetch_chunksize" lumps.
>
> When streaming is enabled, we cannot (at present) trim surplus storage
> back, so you end up with 128K of storage by default.
>
> When you disable streaming, we can trim the storage.
>
> Depending on your average object size, you could try to reduce
> fetch_chunksize to something like 16K maybe.

Yes, that's correct, the backend is not sending a Content-Length: header. Good to know; now I understand why Varnish has been using so much memory.

Thanks!

Regards,
Daniel

From dhose0 at gmail.com Fri Oct 17 13:30:38 2014
From: dhose0 at gmail.com (Daniel)
Date: Fri, 17 Oct 2014 13:30:38 +0200
Subject: Assert error in vbf_fetch_thread() (Varnish 4.0.2)
Message-ID: <5440FDDE.4070102@gmail.com>

Hi again,

Our Varnish died with this in the log. I found https://www.varnish-cache.org/trac/ticket/1596, but don't know if it is the same error?
Oct 17 11:52:50 varnishd[28239]: Child (28241) died signal=6
Oct 17 11:52:50 varnishd[28239]: Child (28241) Panic message:
Assert error in vbf_fetch_thread(), cache/cache_fetch.c line 842:
  Condition(uu == bo->fetch_obj->len) not true.
thread = (cache-worker)
ident = Linux,2.6.32-431.17.1.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
  0x43b4ed: /usr/sbin/varnishd() [0x43b4ed]
  0x43b7fd: /usr/sbin/varnishd() [0x43b7fd]
  0x425a9d: /usr/sbin/varnishd() [0x425a9d]
  0x43e44f: /usr/sbin/varnishd(Pool_Work_Thread+0x4ce) [0x43e44f]
  0x456f84: /usr/sbin/varnishd() [0x456f84]
  0x4570ad: /usr/sbin/varnishd(WRK_thread+0x27) [0x4570ad]
  0x3a9f6079d1: /lib64/libpthread.so.0() [0x3a9f6079d1]
  0x3a9f2e8b5d: /lib64/libc.so.6(clone+0x6d) [0x3a9f2e8b5d]
busyobj = 0x7fbe0df6b020 {
  ws = 0x7fbe0df6b0e0 {
    id = "bo",
    {s,f,r,e} = {0x7fbe0df6d008,+4232,(nil),+57368},
  },
  refcnt = 1
  retries = 0
  failed = 0
  state = 3
  is_do_gzip
  is_do_pass
  is_uncacheable
  is_is_gunzip
  bodystatus = 2 (chunked),
},
ws = 0x7fbe0df6b270 {
  id = "obj",
  {s,f,r,e} = {0x7fbf5e88d738,+664,(nil),+664},
},
objcore (FETCH) = 0x7fbe5c409b00 {
  refcnt = 1
  flags = 0x104
  objhead = 0x7fc089854320
}
obj (FETCH) = 0x7fbf5e88d500 {
  vxid = 3287679352,
  http[obj] = {
    ws = (nil)[]
    "HTTP/1.1",
    "200",
    "OK",
    "Server: nginx/1.4.4",
    "Date: Fri, 17 Oct 2014 09:51:59 GMT",
    "Content-Type: application/hal+json; charset=utf-8",
    "Status: 200 OK",
    "X-Frame-Options: SAMEORIGIN",
    "X-XSS-Protection: 1; mode=block",
    "X-Content-Type-Options: nosniff",
    "X-UA-Compatible: chrome=1",
    "X-NewRelic-App-Data: XXXXX",
    "Cache-Control: max-age=0,
Oct 17 11:52:50 varnishd[28239]: child (8042) Started
Oct 17 11:52:50 varnishd[28239]: Child (8042) said Child starts

/Daniel

From krjensen at ebay.com Fri Oct 17 14:41:58 2014
From: krjensen at ebay.com
(Jensen, Kristian) Date: Fri, 17 Oct 2014 12:41:58 +0000 Subject: Assert error in vbf_fetch_thread() (Varnish 4.0.2) In-Reply-To: <5440FDDE.4070102@gmail.com> References: <5440FDDE.4070102@gmail.com> Message-ID: <10DB2D69D5617B45AA54F1AA801DE34D480002A4@DUB-EXDDA-S11.corp.ebay.com> We never found a solution to this problem, so we downgraded to 3... Best regards Kristian Jensen System Engineer | Site Operations | eBay Classifieds Group? Phone: +45 40226333 | krjensen at ebay.com ? ? -----Original Message----- From: varnish-misc-bounces+krjensen=ebay.com at varnish-cache.org [mailto:varnish-misc-bounces+krjensen=ebay.com at varnish-cache.org] On Behalf Of Daniel Sent: 17. oktober 2014 13:31 To: varnish-misc at varnish-cache.org Subject: Assert error in vbf_fetch_thread() (Varnish 4.0.2) Hi again, Our Varnish died with this in the log: I found https://www.varnish-cache.org/trac/ticket/1596, but don't know if it is the same error? Oct 17 11:52:50 varnishd[28239]: Child (28241) died signal=6 Oct 17 11:52:50 varnishd[28239]: Child (28241) Panic message: #012Assert error in vbf_fetch_thread(), cache/cache_fetch.c line 842:#012 Condition(uu == bo->fetch_obj->len) not true.#012thread = (cache-worker)#012ident = Linux,2.6.32-431.17.1.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit,epoll#012 Backtrace:#012 0x43b4ed: /usr/sbin/varnishd() [0x43b4ed]#012 0x43b7fd: /usr/sbin/varnishd() [0x43b7fd]#012 0x425a9d: /usr/sbin/varnishd() [0x425a9d]#012 0x43e44f: /usr/sbin/varnishd(Pool_Work_Thread+0x4ce) [0x43e44f]#012 0x456f84: /usr/sbin/varnishd() [0x456f84]#012 0x4570ad: /usr/sbin/varnishd(WRK_thread+0x27) [0x4570ad]#012 0x3a9f6079d1: /lib64/libpthread.so.0() [0x3a9f6079d1]#012 0x3a9f2e8b5d: /lib64/libc.so.6(clone+0x6d) [0x3a9f2e8b5d]#012 busyobj = 0x7fbe0df6b020 {#012 ws = 0x7fbe0df6b0e0 {#012 id = "bo",#012 {s,f,r,e} = {0x7fbe0df6d008,+4232,(nil),+57368},#012 },#012 refcnt = 1#012 retries = 0#012 failed = 0#012 state = 3#012 is_do_gzip#012 is_do_pass#012 is_uncacheable#012 
is_is_gunzip#012 bodystatus = 2 (chunked),#012 },#012 ws = 0x7fbe0df6b270 {#012 id = "obj",#012 {s,f,r,e} = {0x7fbf5e88d738,+664,(nil),+664},#012 },#012 objcore (FETCH) = 0x7fbe5c409b00 {#012 refcnt = 1#012 flags = 0x104#012 objhead = 0x7fc089854320#012 }#012 obj (FETCH) = 0x7fbf5e88d500 {#012 vxid = 3287679352,#012 http[obj] = {#012 ws = (nil)[]#012 "HTTP/1.1",#012 "200",#012 OK",#012 "Server: nginx/1.4.4",#012 "Date: Fri, 17 Oct 2014 09:51:59 GMT",#012 "Content-Type: application/hal+json; charset=utf-8",#012 "Status: 200 OK",#012 "X-Frame-Options: SAMEORIGIN",#012 "X-XSS-Protection: 1; mode=block",#012 "X-Content-Type-Options: nosniff",#012 "X-UA-Compatible: chrome=1",#012 "X-NewRelic-App-Data: XXXXX",#012 "Cache-Control: max-age=0, Oct 17 11:52:50 varnishd[28239]: child (8042) Started Oct 17 11:52:50 varnishd[28239]: Child (8042) said Child starts /Daniel _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From jay at forecast.io Mon Oct 20 20:12:09 2014 From: jay at forecast.io (Jay LaPorte) Date: Mon, 20 Oct 2014 14:12:09 -0400 Subject: Are parallel ESI fetches on the roadmap? In-Reply-To: <20141016113706.GC4165@immer.varnish-software.com> References: <20141016113706.GC4165@immer.varnish-software.com> Message-ID: > We (as in the core dev team) have discussed parallell ESI as a potential > 4.1 feature. > > As it is now, we haven't decided on this since HTTP/2.0 needs to be > catered for as well. I'd expect this to formalise a bit more on the next > developer meeting in November. Will notes be published publicly (say, on the wiki) for that? If so, I'll keep an eye out for them. > Currently you can perhaps hack/work around this by setting up Varnish as > a backend to itself. Then the "inner" varnish can give you a slightly > stale version (enable grace), while a background fetch is kicked off to > renew the stored object. 
This should remove most of the extra latency, > given that you in fact can cache the ESI subresources. While this would help for some use-cases (a large number of ESI includes to a relatively small number of repeatedly refreshed resources), it doesn't make much of a dent in our use-case (a large number of ESI includes to a more-or-less infinite number of more-or-less static resources that take some effort to compute initially). I appreciate the tip, though! > Varnish Software (email: sales at varnish-software.com) may be able to help > you out if you are willing to pay for the parallell ESI feature. I'll drop them a line for more information. Thanks for the information and suggestions! Jay LaPorte -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at pathwright.com Mon Oct 20 22:50:35 2014 From: greg at pathwright.com (Greg Taylor) Date: Mon, 20 Oct 2014 16:50:35 -0400 Subject: Random "Data not received" with Varnish behind ELB Message-ID: We've been running Amazon Elastic Load Balancer (ELB) as our front-facing reverse proxy and SSL terminator, with a pool of Django app servers behind it. This setup has worked very well for us for about four years now. To help withstand some bursty traffic from one of our customers, we worked Varnish in behind ELB and in front of our Django app servers. For the most part, this went over very well. The only issue is that some (but not all) of our users are now seeing intermittent "No data received" errors. This looks to mostly be happening with Chrome (but not Chromium on Linux). Here's what it looks like: http://imgur.com/HRkNO6u This error is seen every once in a while inconsistently when browsing around. Whether the page is a cache hit or miss doesn't seem to matter. One of our users has been able to replicate the issue by closing Chrome entirely, then visiting the site. I haven't been able to reproduce it at all on Chromium + Linux. 
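One way to narrow down issues like this on the Varnish side is to watch how client sessions are being closed. A sketch, assuming Varnish 4's log tooling (tag and option names differ in Varnish 3):

```shell
# Show only session-close events, grouped per session; the close reason
# (e.g. a timeout vs. the remote end hanging up) hints at who gave up first.
varnishlog -g session -i SessClose
```

A timeout-related close reason on otherwise healthy requests would point at a mismatch between the ELB idle timeout and Varnish's own timeouts.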
If we yank Varnish out, the problem goes away immediately. Here's what our varnish config looks like at the moment:

https://gist.github.com/gtaylor/ba1ea77b68bd84664e85

Here's our test site:

http://littlepeople.pathwright.com

Any help or ideas would be greatly appreciated. We'd really like to use Varnish for this upcoming traffic burst, but we had tons of complaints about this error when we flipped it on the first time.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tobias.eichelbroenner at lamp-solutions.de Mon Oct 20 23:06:39 2014
From: tobias.eichelbroenner at lamp-solutions.de (=?windows-1252?Q?Tobias_Eichelbr=F6nner?=)
Date: Mon, 20 Oct 2014 23:06:39 +0200
Subject: Random "Data not received" with Varnish behind ELB
In-Reply-To:
References:
Message-ID: <5445795F.1030802@lamp-solutions.de>

Hi Greg,

> To help withstand some bursty traffic from one of our customers, we
> worked Varnish in behind ELB

> If we yank Varnish out, the problem goes away immediately. Here's what
> our varnish config looks like atm:

We experienced trouble sending traffic bursts to ELB in the past. I would suggest putting a simple nginx in front of Varnish for SSL termination and trying to leave the ELB out. If one instance cannot handle the traffic, maybe DNS round-robin is a solution.

Sincerely,

Tobias

-- 
LAMP solutions GmbH
Gostenhofer Hauptstrasse 35
90443 Nuernberg

Amtsgericht Nuernberg: HRB 22366
Geschaeftsfuehrer: Heiko Schubert

Es gelten unsere allgemeinen Geschaeftsbedingungen.
http://www.lamp-solutions.de/agbs/

Telefon : 0911 / 376 516 0
Fax : 0911 / 376 516 11
E-Mail : support at lamp-solutions.de
Web : www.lamp-solutions.de
Facebook : http://www.facebook.com/LAMPsolutions
Twitter : http://twitter.com/#!/lampsolutions

From greg at pathwright.com Mon Oct 20 23:10:49 2014
From: greg at pathwright.com (Greg Taylor)
Date: Mon, 20 Oct 2014 17:10:49 -0400
Subject: Random "Data not received" with Varnish behind ELB
In-Reply-To: <5445795F.1030802@lamp-solutions.de>
References: <5445795F.1030802@lamp-solutions.de>
Message-ID:

I appreciate the suggestion, but neither of those is a great fit for us. I'm sure Varnish is up to the task of handling our relatively plain usage case. It's likely some sort of configuration error on our part, so I'd love to have eyes and ears on the gist I provided with our current setup.

On Mon, Oct 20, 2014 at 5:06 PM, Tobias Eichelbrönner <tobias.eichelbroenner at lamp-solutions.de> wrote:

> Hi Greg,
>
> > To help withstand some bursty traffic from one of our customers, we
> > worked Varnish in behind ELB
>
> > If we yank Varnish out, the problem goes away immediately. Here's what
> > our varnish config looks like atm:
>
> We experienced trouble sending traffic bursts to ELB in the past. I
> would suggest putting a simple nginx for SSL termination in front of the
> Varnish and try to leave the ELB out. If one instance cannot handle the
> traffic, maybe DNS round-robin is a solution.
>
> Sincerely,
>
> Tobias
>
> -- 
> LAMP solutions GmbH
> Gostenhofer Hauptstrasse 35
> 90443 Nuernberg
>
> Amtsgericht Nuernberg: HRB 22366
> Geschaeftsfuehrer: Heiko Schubert
>
> Es gelten unsere allgemeinen Geschaeftsbedingungen.
> http://www.lamp-solutions.de/agbs/
>
> Telefon : 0911 / 376 516 0
> Fax : 0911 / 376 516 11
> E-Mail : support at lamp-solutions.de
> Web : www.lamp-solutions.de
> Facebook : http://www.facebook.com/LAMPsolutions
> Twitter : http://twitter.com/#!/lampsolutions
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-- 
Greg Taylor, Pathwright Co-founder
http://www.pathwright.com
(864) 334-8735

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From greg at pathwright.com Mon Oct 20 23:48:18 2014
From: greg at pathwright.com (Greg Taylor)
Date: Mon, 20 Oct 2014 17:48:18 -0400
Subject: Random "Data not received" with Varnish behind ELB
In-Reply-To:
References:
Message-ID:

We're terminating HTTPS at the ELB level, so Varnish doesn't really know or care what protocol is being used. At the ELB level we have disabled SSLv3, but this particular quirk (the "No data received" error) occurs with and without SSLv3 in ELB's configuration.

My initial theory was that something was timing out somewhere in the stack (ELB -> Varnish -> app server), but I haven't been able to back that up with anything concrete. I've upped the timeouts across the board to test the theory, with no improvement in behavior.

On Mon, Oct 20, 2014 at 5:24 PM, nick tailor wrote:

> Are these doing any SSL?
>
> This could be related to the POODLE bug; a lot of people are disabling
> sslv3 and only using tls1.0+.
>
> If you're using SSL, the handshake could be your problem. However, that's
> just a guess, based on this appearing around the same time POODLE was
> released.
>
> Cheers
>
> Nick Tailor
> nicktailor.com
>
> On Mon, Oct 20, 2014 at 1:50 PM, Greg Taylor wrote:
>
>> We've been running Amazon Elastic Load Balancer (ELB) as our front-facing
>> reverse proxy and SSL terminator, with a pool of Django app servers behind
>> it.
This setup has worked very well for us for about four years now. >> >> To help withstand some bursty traffic from one of our customers, we >> worked Varnish in behind ELB and in front of our Django app servers. For >> the most part, this went over very well. The only issue is that some (but >> not all) of our users are now seeing intermittent "No data received" >> errors. This looks to mostly be happening with Chrome (but not Chromium on >> Linux). Here's what it looks like: >> >> http://imgur.com/HRkNO6u >> >> This error is seen every once in a while inconsistently when browsing >> around. Whether the page is a cache hit or miss doesn't seem to matter. One >> of our users has been able to replicate the issue by closing Chrome >> entirely, then visiting the site. I haven't been able to reproduce it at >> all on Chromium + Linux. >> >> If we yank Varnish out, the problem goes away immediately. Here's what >> our varnish config looks like atm: >> >> https://gist.github.com/gtaylor/ba1ea77b68bd84664e85 >> >> Here's our test site: >> >> http://littlepeople.pathwright.com >> >> Any help or ideas would be greatly appreciated. We'd really like to use >> Varnish for this upcoming traffic burst, but we had tons of complaints >> about this error when we flipped it on the first time. >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > > -- > Nick Tailor > > Senior Linux Systems Administrator > BCLC, 2940 Virtual Way, Vancouver B.C. V5M 0A6 > T 604 228 3046 C 778 388 1397 > > Connect with us: > Twitter @BCLC | Twitter @BCLCGameSense > | YouTube > | Blog > | bclc.com > > Last year, more than $1 billion generated by BCLC gambling activities > went back into health care, education and community groups across B.C. 
> -- Greg Taylor, Pathwright Co-founder http://www.pathwright.com (864) 334-8735 -------------- next part -------------- An HTML attachment was scrubbed... URL: From r_o_l_a_n_d at hotmail.com Tue Oct 21 19:27:15 2014 From: r_o_l_a_n_d at hotmail.com (Roland RoLaNd) Date: Tue, 21 Oct 2014 19:27:15 +0200 Subject: should i install from source if i wanted to use libvmod-secdown Message-ID: All, I have a new requirement which is to setup varnish with libvmod-secdown After a lot of googling, i have a couple of questions that i got stuck on. my questions are fairly simple and obviously newbie oriented as i come from windows/microsoft world. If i wanted to use libvmod-secdown with my varnish installation, should i compile from source and include it somewhere ? i tried using import secdown, but it's complaining that libvmod-secdown.so file doesn't exist when i downloaded the secdown repo from github and used "autogen.sh" no .so file was generated advice ? once again, i hope all you experts don't get annoyed from my newbie sort of questions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r_o_l_a_n_d at hotmail.com Tue Oct 21 20:53:52 2014 From: r_o_l_a_n_d at hotmail.com (Roland RoLaNd) Date: Tue, 21 Oct 2014 20:53:52 +0200 Subject: should i install from source if i wanted to use libvmod-secdown In-Reply-To: References: Message-ID: managed to figure things out, i will post it here for anyone who might end up facing the same newbie issue:

# build varnish from source
cd /opt
apt-get source varnish; cd varnish-3.0.5
./autogen.sh
./configure --prefix=/usr
make

# getting varnish module secdown
cd /opt
wget https://github.com/footplus/libvmod-secdown/archive/master.zip
unzip master.zip
cd libvmod-secdown
./autogen.sh
./configure VARNISHSRC=/opt/varnish-3.0.5/
make
make install
# remember where it saved its libraries, right now it got saved under:
# /usr/lib/x86_64-linux-gnu/varnish/vmods

cd /opt/varnish-3.0.5
make check
make install

# import it in varnish config, and use it as you may

From: r_o_l_a_n_d at hotmail.com To: varnish-misc at varnish-cache.org Subject: should i install from source if i wanted to use libvmod-secdown Date: Tue, 21 Oct 2014 19:27:15 +0200 All, I have a new requirement which is to setup varnish with libvmod-secdown After a lot of googling, i have a couple of questions that i got stuck on. my questions are fairly simple and obviously newbie oriented as i come from windows/microsoft world. If i wanted to use libvmod-secdown with my varnish installation, should i compile from source and include it somewhere ? i tried using import secdown, but it's complaining that libvmod-secdown.so file doesn't exist when i downloaded the secdown repo from github and used "autogen.sh" no .so file was generated advice ? once again, i hope all you experts don't get annoyed from my newbie sort of questions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at pathwright.com Wed Oct 22 18:23:17 2014 From: greg at pathwright.com (Greg Taylor) Date: Wed, 22 Oct 2014 12:23:17 -0400 Subject: Random "Data not received" with Varnish behind ELB In-Reply-To: <20141020215039.GS17222@suricate.otoh.org> References: <20141020215039.GS17222@suricate.otoh.org> Message-ID: We managed to find what appears to be one of these failed requests: 2014-10-22T15:47:56.092374Z littlepeople 68.115.251.182:45368 - -1 -1 -1 408 0 0 0 "GET http://littlepeople.pathwright.com:80/ HTTP/1.1" Says request timeout. I've got my ELB timeout set to 6 seconds, and varnish's default idle_timeout is 10s. I have also added this:

sub vcl_pipe {
    # Don't re-use backend connections.
    set bereq.http.Connection = "close";
}

None of this seems to have made any difference, though.
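[The backend-side timeouts being discussed in this thread are set per backend in VCL. A minimal sketch for reference; the host, port, and values here are illustrative assumptions, not taken from Greg's gist:

backend app {
    .host = "10.0.0.10";            # hypothetical app server address
    .port = "8000";
    .connect_timeout = 5s;          # time allowed to establish the TCP connection
    .first_byte_timeout = 60s;      # time allowed until the first response byte
    .between_bytes_timeout = 60s;   # time allowed between response bytes
}
]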
Any further ideas on how to handle this? On Mon, Oct 20, 2014 at 5:50 PM, Paul Armstrong wrote: > At 2014-10-20T16:50-0400, Greg Taylor wrote: > > We've been running Amazon Elastic Load Balancer (ELB) as our > > front-facing reverse proxy and SSL terminator, with a pool of Django > > app servers behind it. This setup has worked very well for us for > about > > four years now. > > Have you tried turning on ELB logging and seeing if the ELB reports > something interesting for such broken queries? > > Paul > -- Greg Taylor, Pathwright Co-founder http://www.pathwright.com (864) 334-8735 -------------- next part -------------- An HTML attachment was scrubbed... URL: From varnish-cache at otoh.org Wed Oct 22 18:59:55 2014 From: varnish-cache at otoh.org (Paul Armstrong) Date: Wed, 22 Oct 2014 16:59:55 +0000 Subject: Random "Data not received" with Varnish behind ELB In-Reply-To: References: <20141020215039.GS17222@suricate.otoh.org> Message-ID: <20141022165954.GT17222@suricate.otoh.org> At 2014-10-22T12:23-0400, Greg Taylor wrote: > We managed to find what appears to be one of these failed requests: > 2014-10-22T15:47:56.092374Z littlepeople [1]68.115.251.182:45368 - -1 > -1 -1 408 0 0 0 "GET [2]http://littlepeople.pathwright.com:80/ > HTTP/1.1" What does varnishlog show at this point? If you don't see anything from varnishlog during a broken request, I would break out tshark to dump packets on the varnish box. Also check to see if the logs (or packets) when Chrome connects are different from when Firefox connects. It could be something like SPDY not interacting properly with some aspect of the system. > Says request timeout. I've got my ELB timeout set to 6 seconds, and > varnish's default idle_timeout is 10s. I have also added this: What's the backend webserver timeout? What happens if you set them all to have the same timeout? Given the generosity of your other timeouts, why is connect_timeout only 1 second? 
Especially combined with forcing all backend connections to be closed, this seems like a prime candidate for errors. Can you post a gist of "varnishstat -1"? On the ELB monitoring page, what do the statistics look like? Especially for the following: * Surge Queue Length * Spillover Count * Backend Connection Errors * Average Latency (tells you how close you're coming to those timeouts) You might find this useful reading: http://reinvent.kinvey.com/h/i/6206548-key-aws-elb-monitoring-metrics > sub vcl_pipe { > # Don't re-use backend connections. > set bereq.http.Connection = "close"; > } Don't do this. Re-opening connections is expensive. Paul From laszlo.danielisz at yahoo.com Thu Oct 23 13:28:53 2014 From: laszlo.danielisz at yahoo.com (Laszlo Danielisz) Date: Thu, 23 Oct 2014 04:28:53 -0700 Subject: varnish load balance - HLS Message-ID: <1414063733.36664.YahooMailNeo@web160705.mail.bf1.yahoo.com> Hi, I'm trying to use varnish for load balance and cache in front of streaming servers. The stream is HTTP Live streaming based which means there are a bunch of playlists (m3u8) and chunk files (.ts). The thing is if I try to get a http page it is working, but I get no response if I'm trying to play a stream? Has anyone done this before? Thank you! Laszlo -------------- next part -------------- An HTML attachment was scrubbed... URL: From apj at mutt.dk Thu Oct 23 13:40:51 2014 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Thu, 23 Oct 2014 13:40:51 +0200 Subject: varnish load balance - HLS In-Reply-To: <1414063733.36664.YahooMailNeo@web160705.mail.bf1.yahoo.com> References: <1414063733.36664.YahooMailNeo@web160705.mail.bf1.yahoo.com> Message-ID: <20141023114051.GK19870@nerd.dk> On Thu, Oct 23, 2014 at 04:28:53AM -0700, Laszlo Danielisz wrote: > > I'm trying to use varnish for load balance and cache in front of streaming servers. > The stream is HTTP Live streaming based which means there are a bunch of playlists (m3u8) and chunk files (.ts). 
> The thing is if I try to get a http page it is working, but I get no response if I'm trying to play a stream? Define "no response", and send a varnishlog of the failed request. -- Andreas From jay at forecast.io Thu Oct 23 15:03:46 2014 From: jay at forecast.io (Jay LaPorte) Date: Thu, 23 Oct 2014 09:03:46 -0400 Subject: Are parallel ESI fetches on the roadmap? In-Reply-To: References: <20141016113706.GC4165@immer.varnish-software.com> Message-ID: On Thu, Oct 23, 2014 at 5:30 AM, Rubén Romero wrote: > > On Mon, Oct 20, 2014 at 8:12 PM, Jay LaPorte wrote: >> >> Will notes be published publicly (say, on the wiki) for that? If so, I'll keep an eye out for them. > > Yes, all Developer meeting notes are available on the wiki (check the relevant meeting page or notes link): https://www.varnish-cache.org/trac/wiki/VDD Hi Rubén, That's perfect. Thanks! Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.danielisz at yahoo.com Thu Oct 23 15:42:06 2014 From: laszlo.danielisz at yahoo.com (Laszlo Danielisz) Date: Thu, 23 Oct 2014 06:42:06 -0700 Subject: varnish load balance - HLS In-Reply-To: <20141023114051.GK19870@nerd.dk> References: <1414063733.36664.YahooMailNeo@web160705.mail.bf1.yahoo.com> <20141023114051.GK19870@nerd.dk> Message-ID: <1414071726.35160.YahooMailNeo@web160705.mail.bf1.yahoo.com> Hi Andreas, I'm sending the varnishlog, I could not find anything about the request in it. What is interesting is that varnish reports the backends sick after a couple seconds BUT with "HTTP 200 OK", while the .url defined in the probe is still up.
root at rdr00-cdn# varnishlog 0 WorkThread - 0x7fffff5fabf0 start 0 WorkThread - 0x7fffff3f9bf0 start 0 WorkThread - 0x7fffff1f8bf0 start 0 WorkThread - 0x7ffffeff7bf0 start 0 WorkThread - 0x7ffffedf6bf0 start 0 CLI - Rd vcl.load "boot" ./vcl.mY4NLfsb.so 0 Backend_health - backend0 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend0 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend0 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend2 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend2 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend2 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 0.000000 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 0.000000 0 CLI - Wr 200 36 Loaded "./vcl.mY4NLfsb.so" as "boot" 0 CLI - Rd vcl.use "boot" 0 CLI - Wr 200 0 0 CLI - Rd start 0 WorkThread - 0x7ffffe7f3bf0 start 0 Debug - "Acceptor is kqueue" 0 CLI - Wr 200 0 0 WorkThread - 0x7ffffdbedbf0 start 0 WorkThread - 0x7ffffd9ecbf0 start 0 WorkThread - 0x7ffffd7ebbf0 start 0 WorkThread - 0x7ffffd5eabf0 start 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend2 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes 
Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend0 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070353 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070356 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070360 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070363 1.0 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend0 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend2 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070366 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070369 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070372 1.0 0 Backend_health - backend3 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend0 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend2 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: 
no-cache Connection: close Content-T 0 Backend_health - backend1 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070375 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414070378 1.0 This is what I get with curl from varnish, and it times out after a while. The firewall is off. $ curl -v http://127.29.90.120/crossdomain.xml * About to connect() to 127.29.90.120 port 80 (#0) * Trying 127.29.90.120... * Adding handle: conn: 0x7fc720803000 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x7fc720803000) send_pipe: 1, recv_pipe: 0 ^C On Thursday, October 23, 2014 1:43 PM, Andreas Plesner Jacobsen wrote: On Thu, Oct 23, 2014 at 04:28:53AM -0700, Laszlo Danielisz wrote: > > I'm trying to use varnish for load balance and cache in front of streaming servers. > The stream is HTTP Live streaming based which means there are a bunch of playlists (m3u8) and chunk files (.ts). > The thing is if I try to get a http page it is working, but I get no response if I'm trying to play a stream? Define "no response", and send a varnishlog of the failed request. -- Andreas _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From apj at mutt.dk Thu Oct 23 16:27:30 2014 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Thu, 23 Oct 2014 16:27:30 +0200 Subject: varnish load balance - HLS In-Reply-To: <1414071007.74024.YahooMailNeo@web160702.mail.bf1.yahoo.com> References: <1414063733.36664.YahooMailNeo@web160705.mail.bf1.yahoo.com> <20141023114051.GK19870@nerd.dk> <1414071007.74024.YahooMailNeo@web160702.mail.bf1.yahoo.com> Message-ID: <20141023142730.GL19870@nerd.dk> On Thu, Oct 23, 2014 at 06:30:07AM -0700, Laszlo Danielisz wrote: > > I'm sending the varnishlog, I could not find anything about the request in it. Either you have very high timeouts in varnish or you're not hitting varnish. This looks like varnish 3, so try disabling log collation with varnishlog -O > What is interesting is that varnish reports the backends sick after a couple > seconds BUT with "HTTP 200 OK", while the .url defined in the probe is still > up. Did you configure a 0 second timeout for those probes? -- Andreas From stef at scaleengine.com Thu Oct 23 18:42:47 2014 From: stef at scaleengine.com (Stefan Caunter) Date: Thu, 23 Oct 2014 12:42:47 -0400 Subject: varnish load balance - HLS In-Reply-To: <20141023142730.GL19870@nerd.dk> References: <1414063733.36664.YahooMailNeo@web160705.mail.bf1.yahoo.com> <20141023114051.GK19870@nerd.dk> <1414071007.74024.YahooMailNeo@web160702.mail.bf1.yahoo.com> <20141023142730.GL19870@nerd.dk> Message-ID: On Thu, Oct 23, 2014 at 10:27 AM, Andreas Plesner Jacobsen wrote: > On Thu, Oct 23, 2014 at 06:30:07AM -0700, Laszlo Danielisz wrote: >> >> I'm sending the varnishlog, I could not find anything about the request in it. > > Either you have very high timeouts in varnish or you're not hitting varnish. > This looks like varnish 3, so try disabling log collation with varnishlog -O > >> What is interesting is that varnish reports the backends sick after a couple >> seconds BUT with "HTTP 200 OK", while the .url defined in the probe is still >> up. 
> > Did you configure a 0 second timeout for those probes? Can we get some confirmation that curl request to the backend produces valid response? If wowza is not set to respond correctly to request, varnish is not the place to start troubleshooting. ---- Stefan Caunter From laszlo.danielisz at yahoo.com Thu Oct 23 21:24:24 2014 From: laszlo.danielisz at yahoo.com (Laszlo Danielisz) Date: Thu, 23 Oct 2014 12:24:24 -0700 Subject: varnish load balance - HLS In-Reply-To: Message-ID: <1414092264.57435.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com> If I access Wowza directly I get the proper response, the streams are working from all four of the back ends. -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.danielisz at yahoo.com Thu Oct 23 21:25:26 2014 From: laszlo.danielisz at yahoo.com (Laszlo Danielisz) Date: Thu, 23 Oct 2014 12:25:26 -0700 Subject: varnish load balance - HLS In-Reply-To: <20141023142730.GL19870@nerd.dk> Message-ID: <1414092326.84298.YahooMailAndroidMobile@web160701.mail.bf1.yahoo.com> I can look this up tomorrow and will let you know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef at scaleengine.com Fri Oct 24 03:20:25 2014 From: stef at scaleengine.com (Stefan Caunter) Date: Thu, 23 Oct 2014 21:20:25 -0400 Subject: varnish load balance - HLS In-Reply-To: <1414092264.57435.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com> References: <1414092264.57435.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com> Message-ID: Alright, is wowza set to be an HTTP origin? Can you get a .ts list from one varnish if you hit it directly? ---- Stefan Caunter ScaleEngine Inc. 
E: stefan.caunter at scaleengine.com Skype: stefan.caunter Toll Free Direct: +1 800 280 6042 Toronto Canada On Thu, Oct 23, 2014 at 3:24 PM, Laszlo Danielisz < laszlo.danielisz at yahoo.com> wrote: > If I access Wowza directly I get the proper response, the streams are > working from all four of the back ends. > From:"Stefan Caunter" > Date:Thu, Oct 23, 2014 at 18:45 > Subject:Re: varnish load balance - HLS > > On Thu, Oct 23, 2014 at 10:27 AM, Andreas Plesner Jacobsen > wrote: > > On Thu, Oct 23, 2014 at 06:30:07AM -0700, Laszlo Danielisz wrote: > >> > >> I'm sending the varnishlog, I could not find anything about the request > in it. > > > > Either you have very high timeouts in varnish or you're not hitting > varnish. > > This looks like varnish 3, so try disabling log collation with > varnishlog -O > > > >> What is interesting is that varnish reports the backends sick after a > couple > >> seconds BUT with "HTTP 200 OK", while the .url defined in the probe is > still > >> up. > > > > Did you configure a 0 second timeout for those probes? > > Can we get some confirmation that curl request to the backend produces > valid response? If wowza is not set to respond correctly to request, > varnish is not the place to start troubleshooting. > > > ---- > > Stefan Caunter > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From c.mills at auckland.ac.nz Fri Oct 24 03:23:06 2014 From: c.mills at auckland.ac.nz (Clark Mills) Date: Fri, 24 Oct 2014 14:23:06 +1300 Subject: Old Android browser sending through some UPPER case in the host field... And erroring... Message-ID: <5449A9FA.7070100@auckland.ac.nz> Hi all. We're getting an error... 
Error 503 Backend fetch failed Guru Meditation: XID: 123456 Varnish cache server ...when going to a site we host, http://Example.com/ when using an old mobile device (GT-S5660). If we get the browser to lowercase the hostname: http://example.com/ then everything behaves. Is there a way to make it so that the hostname is forced lowercase? I recognise that there might be a slight performance cost. File: varnish.log * << BeReq >> 328010 - Begin bereq 328009 pass - Timestamp Start: 1414100695.527676 0.000000 0.000000 - BereqMethod GET - BereqURL / - BereqProtocol HTTP/1.1 - BereqHeader Host: Example.com - BereqHeader Accept-Encoding: gzip - BereqHeader Accept-Language: en-NZ, en-US - BereqHeader x-wap-profile: http://wap.samsungmobile.com/uaprof/GT-S5660.xml - BereqHeader User-Agent: Mozilla/5.0 (Linux; U; Android 2.3.6; en-nz; GT-S5660 Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 - BereqHeader Cookie: SESS9f2bcbf4975f5536ef0daf298ed65496=5hrnD05HnvODku5BDRk2L_uu-L4GP0PcrGXEpYC5tqw; has_js=1 - BereqHeader Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 - BereqHeader Accept-Charset: utf-8, iso-8859-1, utf-16, *;q=0.7 - BereqHeader X-Forwarded-For: 172.24.75.241 - BereqHeader X-Varnish: 328010 - VCL_call BACKEND_FETCH - VCL_return fetch - FetchError no backend connection - Timestamp Beresp: 1414100695.527838 0.000162 0.000162 - Timestamp Error: 1414100695.527847 0.000171 0.000009 - BerespProtocol HTTP/1.1 - BerespStatus 503 - BerespReason Service Unavailable - BerespReason Backend fetch failed - BerespHeader Date: Thu, 23 Oct 2014 21:44:55 GMT - BerespHeader Server: Varnish - VCL_call BACKEND_ERROR - BerespHeader Content-Type: text/html; charset=utf-8 - BerespHeader Retry-After: 5 - VCL_return deliver - Storage malloc Transient - ObjProtocol HTTP/1.1 - ObjStatus 503 - ObjReason Backend fetch failed - ObjHeader Date: Thu, 23 Oct 2014 21:44:55 GMT - ObjHeader Server: Varnish - 
ObjHeader Content-Type: text/html; charset=utf-8 - ObjHeader Retry-After: 5 - Length 418 - BereqAcct 0 0 0 0 0 0 - End I tried being clever and attempted tolower but it just hung for me. I probably am showing my naivety... It compiled so it must be correct! :) File: default.vcl

import std;
...
sub vcl_recv {
    set req.http.host = std.tolower(req.http.host);
}

Many thanks... Clark From mattias at nucleus.be Fri Oct 24 09:41:45 2014 From: mattias at nucleus.be (Mattias Geniar) Date: Fri, 24 Oct 2014 07:41:45 +0000 Subject: Determine if grace object is available In-Reply-To: References: <7C954D35-328D-49B1-B854-302A26CC1981@dragondata.com> <20140723130343.GB11654@immer.varnish-software.com> Message-ID: >I'd like to get fancy with grace stored objects, but I'm not sure how >>>to do this. Can I determine if there's a grace object I could deliver? >>>Basically I want my logic to be: > >As a follow-up to this thread, I'm wondering if the following is possible, >given that there is a director present with 2 servers, having health >probes configured. > >If the director has no healthy backends; >1. See if a grace object is available, if so, deliver it >2. If no grace object is available, change the backend to a "maintenance" >one to serve a static HTML page for maintenance purposes > >The struggle is in vcl_recv {}, how would this be able to work? If I use >req.backend.healthy to determine the backend health to set a new backend, >I lose the grace ability (as it'll be passed to the new, available, >backend?). Or I'm missing something here. > ># Use grace, when available and when needed >if (! req.backend.healthy) { > # Backends are sick, so fall back to a stale object, if possible > set req.grace = 5m; > > # If no stale object is available, how should we switch to a new >backend? > set req.backend = maintenance_backend; # This could serve static pages >with maintenance info >} > > >I'm thinking something like this, but it's not possible? 
>if (req.grace.available) { > set req.grace = 5m; >} else { > # No grace object available, set new backend > set req.backend = maintenance_backend; >} Did anyone have any valuable input on this, or should we just assume it's not possible in the current versions? Regards, Mattias From laszlo.danielisz at yahoo.com Fri Oct 24 21:47:55 2014 From: laszlo.danielisz at yahoo.com (Laszlo Danielisz) Date: Fri, 24 Oct 2014 12:47:55 -0700 Subject: varnish load balance - HLS In-Reply-To: References: <1414092264.57435.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com> Message-ID: <1414180075.84173.YahooMailNeo@web160701.mail.bf1.yahoo.com> Yes, I can reach the .m3u8 and .ts files from wowza with curl/wget On Friday, October 24, 2014 4:20 AM, Stefan Caunter wrote: Alright, is wowza set to be an HTTP origin? Can you get a .ts list from one varnish if you hit it directly? ---- Stefan Caunter ScaleEngine Inc. E: stefan.caunter at scaleengine.com Skype: stefan.caunter Toll Free Direct: +1 800 280 6042 Toronto Canada On Thu, Oct 23, 2014 at 3:24 PM, Laszlo Danielisz wrote: If I access Wowzadirectly I get the proper response, the streams are working from all four of the back ends. > >From:"Stefan Caunter" >Date:Thu, Oct 23, 2014 at 18:45 >Subject:Re: varnish load balance - HLS > > >On Thu, Oct 23, 2014 at 10:27 AM, Andreas Plesner Jacobsen wrote: >> On Thu, Oct 23, 2014 at 06:30:07AM -0700, Laszlo Danielisz wrote: >>> >>> I'm sending the varnishlog, I could not find anything about the request in it. >> >> Either you have very high timeouts in varnish or you're not hitting varnish. >> This looks like varnish 3, so try disabling log collation with varnishlog -O >> >>> What is interesting is that varnish reports the backends sick after a couple >>> seconds BUT with "HTTP 200 OK", while the .url defined in the probe is still >>> up. >> >> Did you configure a 0 second timeout for those probes? > >Can we get some confirmation that curl request to the backend produces >valid response? 
If wowza is not set to respond correctly to request, >varnish is not the place to start troubleshooting. > > >---- > >Stefan Caunter > > >_______________________________________________ >varnish-misc mailing list >varnish-misc at varnish-cache.org >https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.danielisz at yahoo.com Fri Oct 24 21:53:33 2014 From: laszlo.danielisz at yahoo.com (Laszlo Danielisz) Date: Fri, 24 Oct 2014 12:53:33 -0700 Subject: varnish load balance - HLS In-Reply-To: <1414092326.84298.YahooMailAndroidMobile@web160701.mail.bf1.yahoo.com> References: <20141023142730.GL19870@nerd.dk> <1414092326.84298.YahooMailAndroidMobile@web160701.mail.bf1.yahoo.com> Message-ID: <1414180413.86060.YahooMailNeo@web160706.mail.bf1.yahoo.com> Timeout is set to 1 second for each probe Here is a varnishlog -O # varnishlog -O 0 Backend_health - backend1 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414180168 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414180171 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414180174 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414180177 1.0 0 Backend_health - backend2 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend0 Still healthy 4--X--- 
3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend1 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414180180 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414180183 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1414180186 1.0 0 Backend_health - backend2 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend3 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend0 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T 0 Backend_health - backend1 Went sick 4--X--- 2 3 5 0.000000 0.000000 HTTP/1.1 200 OK Accept-Ranges: bytes Server: WowzaStreamingEngine/4.1.0 Cache-Control: no-cache Connection: close Content-T On Thursday, October 23, 2014 10:25 PM, Laszlo Danielisz wrote: I can look this up tomorrow and will let you know. From:"Andreas Plesner Jacobsen" Date:Thu, Oct 23, 2014 at 16:29 Subject:Re: varnish load balance - HLS On Thu, Oct 23, 2014 at 06:30:07AM -0700, Laszlo Danielisz wrote: > > I'm sending the varnishlog, I could not find anything about the request in it. Either you have very high timeouts in varnish or you're not hitting varnish. This looks like varnish 3, so try disabling log collation with varnishlog -O > What is interesting is that varnish reports the backends sick after a couple > seconds BUT with "HTTP 200 OK", while the .url defined in the probe is still > up. 
Did you configure a 0 second timeout for those probes? -- Andreas _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef at scaleengine.com Sat Oct 25 09:29:59 2014 From: stef at scaleengine.com (Stefan Caunter) Date: Sat, 25 Oct 2014 03:29:59 -0400 Subject: varnish load balance - HLS In-Reply-To: <1414180075.84173.YahooMailNeo@web160701.mail.bf1.yahoo.com> References: <1414092264.57435.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com> <1414180075.84173.YahooMailNeo@web160701.mail.bf1.yahoo.com> Message-ID: Do you get m3u8 list from a varnish? It's basic HTTP object caching. Do all varnishes return the same result/headers? What is the LB algorithm? ---- Stefan Caunter ScaleEngine Inc. E: stefan.caunter at scaleengine.com Skype: stefan.caunter Toll Free Direct: +1 800 280 6042 Toronto Canada On Fri, Oct 24, 2014 at 3:47 PM, Laszlo Danielisz < laszlo.danielisz at yahoo.com> wrote: > Yes, I can reach the .m3u8 and .ts files from wowza with curl/wget > > > On Friday, October 24, 2014 4:20 AM, Stefan Caunter < > stef at scaleengine.com> wrote: > > > Alright, is wowza set to be an HTTP origin? Can you get a .ts list from > one varnish if you hit it directly? > > > > ---- > > Stefan Caunter > ScaleEngine Inc. > > E: stefan.caunter at scaleengine.com > Skype: stefan.caunter > Toll Free Direct: +1 800 280 6042 > Toronto Canada > > On Thu, Oct 23, 2014 at 3:24 PM, Laszlo Danielisz < > laszlo.danielisz at yahoo.com> wrote: > > If I access Wowza directly I get the proper response, the streams are > working from all four of the back ends. 
> From: "Stefan Caunter"
> Date: Thu, Oct 23, 2014 at 18:45
> Subject: Re: varnish load balance - HLS
>
> On Thu, Oct 23, 2014 at 10:27 AM, Andreas Plesner Jacobsen
> wrote:
> > On Thu, Oct 23, 2014 at 06:30:07AM -0700, Laszlo Danielisz wrote:
> >>
> >> I'm sending the varnishlog, I could not find anything about the request
> >> in it.
> >
> > Either you have very high timeouts in varnish or you're not hitting
> > varnish.
> > This looks like varnish 3, so try disabling log collation with
> > varnishlog -O
> >
> >> What is interesting is that varnish reports the backends sick after a
> >> couple seconds BUT with "HTTP 200 OK", while the .url defined in the
> >> probe is still up.
> >
> > Did you configure a 0 second timeout for those probes?
>
> Can we get some confirmation that a curl request to the backend produces a
> valid response? If wowza is not set to respond correctly to requests,
> varnish is not the place to start troubleshooting.
>
> ----
>
> Stefan Caunter
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From denis.zhdanov at gmail.com  Sat Oct 25 09:51:21 2014
From: denis.zhdanov at gmail.com (Denis Zhdanov)
Date: Sat, 25 Oct 2014 09:51:21 +0200
Subject: [Varnish 3] strange memory consumption after upgrade 3.0.2 -> 3.0.5
Message-ID: 

Hi All,

We are trying to upgrade our varnish from the pretty old 3.0.2 to 3.0.5
(3.0.6 was just released and we'll try it too, but for now we are working
with 3.0.5). We are using only RAM storage; before the upgrade it consumed
about 90G after 8 hours and stayed there. But there seems to be a leak in
3.0.5: it doesn't stop consuming RAM until it runs out. It's not a fast
process - it can take a week - but after that the machine starts swapping
and we are forced to restart varnish (and lose the whole cache).
I tried to compare varnishstat from "bad" and "good" node but didn't find anything strange - at least SMA.ram.g_bytes is approximately same for both nodes. Could someone please take a look? Good varnishstat -1 --------------- epsvarnish_good: client_conn 210291617 5.38 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 19264242207 493.21 Client requests received cache_hit 17890026068 458.02 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 1373994639 35.18 Cache misses backend_conn 13241605 0.34 Backend conn. success backend_unhealthy 926862 0.02 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 47 0.00 Backend conn. failures backend_reuse 1359807120 34.81 Backend conn. reuses backend_toolate 13870 0.00 Backend conn. was closed backend_recycle 1359883733 34.82 Backend conn. recycles backend_retry 44845 0.00 Backend conn. retry fetch_head 0 0.00 Fetch head fetch_length 1373044190 35.15 Fetch with Length fetch_chunked 950 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 100547 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) n_sess_mem 3722 . N struct sess_mem n_sess 1870 . N struct sess n_object 4007618 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 4009072 . N struct objectcore n_objecthead 4023957 . N struct objecthead n_waitinglist 6689775 . N struct waitinglist n_vbc 20 . N struct vbc n_wrk 1600 . N worker threads n_wrk_create 1617 0.00 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 1703 0.00 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . 
N backends n_expired 25190657 . N expired objects n_lru_nuked 1343746322 . N LRU nuked objects n_lru_moved 16549646438 . N LRU moved objects losthdr 1771 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 19157759781 490.48 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 210297216 5.38 Total Sessions s_req 19264242207 493.21 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 1372944593 35.15 Total fetch s_hdrbytes 7493165988804 191841.77 Total header bytes s_bodybytes 230757381423391 5907903.79 Total body bytes sess_closed 7877791 0.20 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 19261784636 493.14 Session Linger sess_herd 12270879099 314.16 Session herd shm_records 796058319688 20380.87 SHM records shm_writes 34949444335 894.78 SHM writes shm_flushes 3 0.00 SHM flushes due to overflow shm_cont 213814789 5.47 SHM MTX contention shm_cycles 343350 0.01 SHM cycles through buffer sms_nreq 1271021 0.03 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 578078958 . SMS bytes allocated sms_bfree 578078958 . SMS bytes freed backend_req 1373106504 35.15 Backend requests made n_vcl 8 0.00 N vcl total n_vcl_avail 8 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . 
N total active bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 19230468606 492.34 HCB Lookups without lock hcb_lock 1371994951 35.13 HCB Lookups with lock hcb_insert 1371991264 35.13 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 39059096 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 1 . Loaded VMODs n_gzip 7 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations VBE.autojournaal(10.32.230.188,,8484).vcls 5 . VCL references VBE.autojournaal(10.32.230.188,,8484).happy18446744073709551615 . Happy health probes VBE.beslist(10.32.230.188,,8484).vcls 1 . VCL references VBE.beslist(10.32.230.188,,8484).happy 18446744073709551615 . 
Happy health probes LCK.sms.creat 1 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 3813063 0.10 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 6898140689 176.61 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 1 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 2740196312 70.16 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 16514901 0.42 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 1 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 3722 0.00 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 1 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 210376778 5.39 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 154004420 3.94 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 1 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 31 0.00 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 16 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 25196944400 645.10 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 
1371962780 35.13 Created locks LCK.objhdr.destroy 1367969515 35.02 Destroyed locks LCK.objhdr.locks 79782749795 2042.62 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 2780936064 71.20 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 2716690775 69.55 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 2227237 0.06 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 2781138078 71.20 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 1 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 9013627 0.23 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 26485466 0.68 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 3 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 2773680373 71.01 Lock Operations LCK.backend.colls 0 0.00 Collisions SMA.ram.c_req 4109962205 105.22 Allocator requests SMA.ram.c_fail 52455807295423 1342985.70 Allocator failures SMA.ram.c_bytes 27203969692807 696482.32 Bytes allocated SMA.ram.c_freed 27128807778988 694558.00 Bytes freed SMA.ram.g_alloc 8016209 . Allocations outstanding SMA.ram.g_bytes 75161913819 . Bytes outstanding SMA.ram.g_space 13861 . Bytes available SMA.Transient.c_req 47750773 1.22 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 58471006951 1496.99 Bytes allocated SMA.Transient.c_freed 58470979290 1496.99 Bytes freed SMA.Transient.g_alloc 22 . Allocations outstanding SMA.Transient.g_bytes 27661 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.proxy(10.x.x.x,,80).vcls 8 . 
VCL references VBE.proxy(10.x.x.x,,80).happy 18446744073709551615 . Happy health probes --------------- Bad varnishstat -1 --------------- epsvarnish_bad: client_conn 2661600 6.43 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 238438255 575.65 Client requests received cache_hit 213261383 514.87 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 25173891 60.78 Cache misses backend_conn 397844 0.96 Backend conn. success backend_unhealthy 18139 0.04 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 2675 0.01 Backend conn. failures backend_reuse 24825052 59.93 Backend conn. reuses backend_toolate 45 0.00 Backend conn. was closed backend_recycle 24825811 59.94 Backend conn. recycles backend_retry 70506 0.17 Backend conn. retry fetch_head 0 0.00 Fetch head fetch_length 25137933 60.69 Fetch with Length fetch_chunked 10 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 530 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) n_sess_mem 4732 . N struct sess_mem n_sess 148 . N struct sess n_object 4003646 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 4004774 . N struct objectcore n_objecthead 4015924 . N struct objecthead n_waitinglist 1588 . N struct waitinglist n_vbc 22 . N struct vbc n_wrk 1600 . N worker threads n_wrk_create 3068 0.01 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 13255 0.03 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . N backends n_expired 263264 . N expired objects n_lru_nuked 20870512 . N LRU nuked objects n_lru_moved 200584889 . 
N LRU moved objects losthdr 8 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 236130810 570.08 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 2661694 6.43 Total Sessions s_req 238438255 575.65 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 25137413 60.69 Total fetch s_hdrbytes 117632270881 283995.98 Total header bytes s_bodybytes 2917193607304 7042890.96 Total body bytes sess_closed 169195 0.41 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 238353273 575.45 Session Linger sess_herd 148971811 359.66 Session herd shm_records 10788946827 26047.42 SHM records shm_writes 446371290 1077.66 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 2328506 5.62 SHM MTX contention shm_cycles 4750 0.01 SHM cycles through buffer sms_nreq 39446 0.10 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 18106011 . SMS bytes allocated sms_bfree 18106011 . SMS bytes freed backend_req 25222980 60.90 Backend requests made n_vcl 2 0.00 N vcl total n_vcl_avail 2 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . N total active bans n_ban_gone 1 . 
N total gone bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 238293148 575.30 HCB Lookups without lock hcb_lock 25165922 60.76 HCB Lookups with lock hcb_insert 25165889 60.76 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 414204 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 1 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations sess_pipe_overflow 0 . Dropped sessions due to session pipe overflow LCK.sms.creat 1 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 118338 0.29 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 114658606 276.82 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 1 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 46330459 111.85 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 
0.00 Destroyed locks LCK.vcl.locks 175044 0.42 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 1 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 2666278 6.44 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 1 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 2667059 6.44 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 1316488 3.18 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 1 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 11717 0.03 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 16 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 305048412 736.47 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 25167014 60.76 Created locks LCK.objhdr.destroy 21151796 51.07 Destroyed locks LCK.objhdr.locks 996006829 2404.63 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 46685330 112.71 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 46007928 111.08 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 88654 0.21 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 46686411 112.71 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 1 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 125227 0.30 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 801030 1.93 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 3 0.00 Created locks 
LCK.backend.destroy           0          0.00 Destroyed locks
LCK.backend.locks      50871308        122.82 Lock Operations
LCK.backend.colls             0          0.00 Collisions
SMA.ram.c_req          71969321        173.75 Allocator requests
SMA.ram.c_fail         22016345         53.15 Allocator failures
SMA.ram.c_bytes    504148551835    1217150.37 Bytes allocated
SMA.ram.c_freed    428986641816    1035689.28 Bytes freed
SMA.ram.g_alloc         8008174             . Allocations outstanding
SMA.ram.g_bytes     75161910019             . Bytes outstanding
SMA.ram.g_space           17661             . Bytes available
SMA.Transient.c_req      368754          0.89 Allocator requests
SMA.Transient.c_fail          0          0.00 Allocator failures
SMA.Transient.c_bytes 463851431       1119.86 Bytes allocated
SMA.Transient.c_freed 463818676       1119.78 Bytes freed
SMA.Transient.g_alloc        26             . Allocations outstanding
SMA.Transient.g_bytes     32755             . Bytes outstanding
SMA.Transient.g_space         0             . Bytes available
VBE.proxy(10.x.x.x,,80).vcls            2  . VCL references
VBE.proxy(10.x.x.x,,80).happy 18446744073709551615 . Happy health probes
---------------

Command line:
/usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f /etc/varnish/varnish.vcl -T :6082 -S /etc/varnish/secret -s ram=malloc,70G -p thread_pool_add_delay=2 -p http_gzip_support=on -p thread_pools=16 -p default_ttl=7200 -p thread_pool_min=100 -p thread_pool_max=4000 -p session_linger=50 -p sess_workspace=1048576 -p shm_workspace=8192

Server is a PowerEdge M610, 96G RAM, 2x Xeon E5620+HT (16 cores). The good
system is running on Debian Squeeze, the bad one on Ubuntu Precise (yep, I
know, it was a bad idea to change varnish and the OS at the same time *sigh*).

With best regards,
Denis
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From laszlo.danielisz at yahoo.com  Sat Oct 25 20:26:21 2014
From: laszlo.danielisz at yahoo.com (Laszlo Danielisz)
Date: Sat, 25 Oct 2014 11:26:21 -0700
Subject: varnish load balance - HLS
In-Reply-To: 
Message-ID: <1414261581.37158.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com>

I can't get any m3u8 files from varnish. I'm using round-robin and I have one
varnish for now.
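Back on the HLS thread: once a varnish can fetch the playlist from its backend at all, the usual pattern is to cache .m3u8 playlists very briefly and .ts segments longer, so clients behind the round-robin director see consistent lists. A hedged Varnish 3 sketch (the TTLs and URL patterns are illustrative assumptions, not advice given in this thread):

```vcl
sub vcl_fetch {
    if (req.url ~ "\.m3u8$") {
        # playlists change every segment interval; cache only briefly
        set beresp.ttl = 1s;
    } else if (req.url ~ "\.ts$") {
        # segments are immutable once published, so a longer TTL is safe
        set beresp.ttl = 60s;
    }
}
```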
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From denis.zhdanov at gmail.com  Sat Oct 25 22:31:06 2014
From: denis.zhdanov at gmail.com (Denis Zhdanov)
Date: Sat, 25 Oct 2014 22:31:06 +0200
Subject: [Varnish 3] strange memory consumption after upgrade 3.0.2 -> 3.0.5
In-Reply-To: 
References: 
Message-ID: 

Hi All,

Responding to myself: it seems that increasing vm.max_map_count to e.g.
262120 (instead of the default 65530) and restarting varnish helps.
(3.0.2 on Debian Squeeze runs fine with the default 65530.)

WBR,
Denis

2014-10-25 9:51 GMT+02:00 Denis Zhdanov :
> Hi All,
>
> We are trying to upgrade our varnish from pretty old 3.0.2 to 3.0.5 (3.0.6
> was just released, we'll try it too, but now we are working with 3.0.5)
>
> [rest of the original message, including both varnishstat dumps quoted
> above in full - snipped]
Collisions > LCK.vbe.creat 1 0.00 Created locks > LCK.vbe.destroy 0 0.00 Destroyed locks > LCK.vbe.locks 801030 1.93 Lock Operations > LCK.vbe.colls 0 0.00 Collisions > LCK.backend.creat 3 0.00 Created locks > LCK.backend.destroy 0 0.00 Destroyed locks > LCK.backend.locks 50871308 122.82 Lock Operations > LCK.backend.colls 0 0.00 Collisions > SMA.ram.c_req 71969321 173.75 Allocator requests > SMA.ram.c_fail 22016345 53.15 Allocator failures > SMA.ram.c_bytes 504148551835 1217150.37 Bytes allocated > SMA.ram.c_freed 428986641816 1035689.28 Bytes freed > SMA.ram.g_alloc 8008174 . Allocations outstanding > SMA.ram.g_bytes 75161910019 . Bytes outstanding > SMA.ram.g_space 17661 . Bytes available > SMA.Transient.c_req 368754 0.89 Allocator requests > SMA.Transient.c_fail 0 0.00 Allocator failures > SMA.Transient.c_bytes 463851431 1119.86 Bytes allocated > SMA.Transient.c_freed 463818676 1119.78 Bytes freed > SMA.Transient.g_alloc 26 . Allocations outstanding > SMA.Transient.g_bytes 32755 . Bytes outstanding > SMA.Transient.g_space 0 . Bytes available > VBE.proxy(10.x.x.x,,80).vcls 2 . VCL references > VBE.proxy(10.x.x.x,,80).happy18446744073709551615 . Happy > health probes > --------------- > > Command line > /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f > /etc/varnish/varnish.vcl -T :6082 -S /etc/varnish/secret -s ram=malloc,70G > -p thread_pool_add_delay=2 -p http_gzip_support=on -p thread_pools=16 -p > default_ttl=7200 -p thread_pool_min=100 -p thread_pool_max=4000 -p > session_linger=50 -p sess_workspace=1048576 -p shm_workspace=8192 > > Server is PowerEdge M610, 96G RAM, 2xXeon E5620+HT (16 cores). > Good system running on Debian Squeeze, Bad one - on Ubuntu Precise (yep, I > know, it was a bad idea to change varnish and OS in same time *sigh*) > > With best regards, > Denis > -------------- next part -------------- An HTML attachment was scrubbed... 
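The counters in Denis's "bad" dump above already tell a story: SMA.ram.g_space is down to 17661 bytes of a 70G malloc store, SMA.ram.c_fail has climbed past 22 million, and n_lru_nuked is at 20.8 million — the cache is full and churning evictions. The ratios are easy to sanity-check from the pasted figures (plain Python; the numbers are copied from the dump above, the arithmetic is ours):

```python
# Counters copied from the "bad" varnishstat -1 dump above.
client_req = 238_438_255      # client_req
cache_hit = 213_261_383       # cache_hit
cache_miss = 25_173_891       # cache_miss

sma_ram_c_req = 71_969_321    # SMA.ram.c_req, allocator requests
sma_ram_c_fail = 22_016_345   # SMA.ram.c_fail, allocator failures

# Fraction of client requests served from cache.
hit_ratio = cache_hit / client_req
# Fraction of storage allocation attempts that failed outright.
alloc_fail_ratio = sma_ram_c_fail / sma_ram_c_req

print(f"cache hit ratio:      {hit_ratio:.1%}")
print(f"allocator fail ratio: {alloc_fail_ratio:.1%}")
```

This works out to roughly an 89% hit rate but nearly a third of allocations failing, which is consistent with the heavy LRU nuking on the "bad" box rather than with a misconfigured VCL.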
From ignasr at vault13.lt Tue Oct 28 09:38:57 2014
From: ignasr at vault13.lt (Ignas)
Date: Tue, 28 Oct 2014 10:38:57 +0200
Subject: file permission on compiled vcl .so files
Message-ID: <544F5621.4010108@vault13.lt>

Hello,

There is a closed issue that was reported against v3.0.1 but fixed only in v4.0.0 (if my git-fu is correct). Would it be possible to apply this fix to the official v3 branch as well?

https://www.varnish-cache.org/trac/ticket/1072
https://github.com/varnish/Varnish-Cache/commit/ee439631b413cc5505e384c233ca36930cd33a70

Thank you
Ignas

From viktor.gunnarson at ericsson.com Tue Oct 28 12:18:25 2014
From: viktor.gunnarson at ericsson.com (Viktor Gunnarson)
Date: Tue, 28 Oct 2014 11:18:25 +0000
Subject: [Varnish 4] How to purge based on regex
Message-ID:

Hi,

I wonder how I can make Varnish 4 purge based on a regex. I've found old examples of how to do this in previous versions of Varnish, but no example for Varnish 4.

What I want to accomplish is basically this: I send a PURGE request:

PURGE /api/news/ HTTP/1.1

This should then purge /api/news/ but also all URLs with query parameters (such as /api/news/?lang=en). I have tried the following without any success:

sub vcl_recv {
    if (req.method == "PURGE") {
        return (purge);
    }
}

sub vcl_hash {
    if (req.method == "PURGE") {
        # Remove ? and everything after that
        set req.http.purge-url = regsub(req.url, "^([/a-zA-Z0-9]*)([\?=&a-zA-Z0-9]*)", "\1");
        # Make sure the url ends with a /
        if (!req.http.purge-url ~ "[.]*/$") {
            set req.http.purge-url = regsub(req.http.purge-url, "$", "/");
        }
        # Make it a regex
        set req.http.purge-url = "^" + req.http.purge-url + ".*$";
        hash_data(req.http.purge-url);
        return (lookup);
    }
}

-------------- next part --------------
An HTML attachment was scrubbed...
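Hashing a regex, as attempted above, can never match objects that were stored under their literal URLs; the mechanism Varnish 4 offers for wildcard invalidation is a ban, which is where Viktor himself lands later in this thread. A minimal sketch of the complete pattern (the `x-url`/`X-Ban` header names follow his later reply; the ACL address is an illustrative assumption):

```vcl
vcl 4.0;

acl purge {
    "127.0.0.1";    # hosts allowed to issue BAN requests (assumption)
}

sub vcl_recv {
    if (req.method == "BAN") {
        if (!client.ip ~ purge) {
            return (synth(403, "Not allowed"));
        }
        # X-Ban carries a regex, e.g. "^/api/news/"
        ban("obj.http.x-url ~ " + req.http.X-Ban);
        return (synth(200, "Ban added"));
    }
}

sub vcl_backend_response {
    # Record the URL on the object so bans can match it later.
    set beresp.http.x-url = bereq.url;
}

sub vcl_deliver {
    # Keep the bookkeeping header out of client responses.
    unset resp.http.x-url;
}
```

With this in place, a BAN request carrying `X-Ban: ^/api/news/` invalidates /api/news/ together with every query-string variant the next time each object is looked up.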
From hernan at cmsmedios.com Tue Oct 28 19:23:03 2014
From: hernan at cmsmedios.com (Hernán Marsili)
Date: Tue, 28 Oct 2014 15:23:03 -0300
Subject: tuning nfiles and memlock
Message-ID:

Hi,

We have sites with around 3000 concurrent users per box. We were using the defaults for NFILES and MEMLOCK. We tuned them to the following values and found a 20% increase in the server's traffic-handling limit (benchmarked consistently with the Apache ab tool).

NFILES=262144
MEMLOCK=204800

Does anyone have any experience regarding this? Thanks!

Regards,
Hernán.

-------------- next part --------------
An HTML attachment was scrubbed...

From viktor.gunnarson at ericsson.com Wed Oct 29 09:09:53 2014
From: viktor.gunnarson at ericsson.com (Viktor Gunnarson)
Date: Wed, 29 Oct 2014 08:09:53 +0000
Subject: [Varnish 4] How to purge based on regex
In-Reply-To: References: Message-ID:

Replying to my own message. After some more research I realized that what I really want to do is set a ban. I hadn't realized that this actually allows Varnish to fetch new content and add it to the cache; my understanding was that a ban would force Varnish to _always_ go to the backend for objects matching the ban.
This is what I have added to my VCL:

if (req.method == "BAN") {
    if (!client.ip ~ purge) {
        return (synth(403, "Not allowed"));
    }
    # Add the ban
    ban("obj.http.x-url ~ " + req.http.X-Ban);
    return (synth(200, "Ban added"));
}

Best regards,
Viktor

From: varnish-misc-bounces+viktor.gunnarson=ericsson.com at varnish-cache.org [mailto:varnish-misc-bounces+viktor.gunnarson=ericsson.com at varnish-cache.org] On Behalf Of Viktor Gunnarson
Sent: 28 October 2014 12:18
To: varnish-misc at varnish-cache.org
Subject: [Varnish 4] How to purge based on regex

-------------- next part --------------
An HTML attachment was scrubbed...

From stef at scaleengine.com Wed Oct 29 19:54:33 2014
From: stef at scaleengine.com (Stefan Caunter)
Date: Wed, 29 Oct 2014 14:54:33 -0400
Subject: varnish load balance - HLS
In-Reply-To: <1414261581.37158.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com>
References: <1414261581.37158.YahooMailAndroidMobile@web160705.mail.bf1.yahoo.com>
Message-ID:

If you do varnishlog > log.txt and run a request you should see what is happening.
Unless you can get us some output to analyze, there is not much we can do to help. ---- Stefan Caunter On Sat, Oct 25, 2014 at 2:26 PM, Laszlo Danielisz < laszlo.danielisz at yahoo.com> wrote: > I can't get any m3u8 files from varnish. > I'm using round-robin and I have one varnish for now. > From:"Stefan Caunter" > Date:Sat, Oct 25, 2014 at 10:30 > Subject:Re: varnish load balance - HLS > > Do you get m3u8 list from a varnish? It's basic HTTP object caching. Do > all varnishes return the same result/headers? What is the LB algorithm? > > > > ---- > > Stefan Caunter > ScaleEngine Inc. > > E: stefan.caunter at scaleengine.com > Skype: stefan.caunter > Toll Free Direct: +1 800 280 6042 > Toronto Canada > > On Fri, Oct 24, 2014 at 3:47 PM, Laszlo Danielisz < > laszlo.danielisz at yahoo.com> wrote: > >> Yes, I can reach the .m3u8 and .ts files from wowza with curl/wget >> >> >> On Friday, October 24, 2014 4:20 AM, Stefan Caunter < >> stef at scaleengine.com> wrote: >> >> >> Alright, is wowza set to be an HTTP origin? Can you get a .ts list from >> one varnish if you hit it directly? >> >> >> >> ---- >> >> Stefan Caunter >> ScaleEngine Inc. >> >> E: stefan.caunter at scaleengine.com >> Skype: stefan.caunter >> Toll Free Direct: +1 800 280 6042 >> Toronto Canada >> >> On Thu, Oct 23, 2014 at 3:24 PM, Laszlo Danielisz < >> laszlo.danielisz at yahoo.com> wrote: >> >> If I access Wowza directly I get the proper response, the streams are >> working from all four of the back ends. >> From:"Stefan Caunter" >> Date:Thu, Oct 23, 2014 at 18:45 >> Subject:Re: varnish load balance - HLS >> >> On Thu, Oct 23, 2014 at 10:27 AM, Andreas Plesner Jacobsen >> wrote: >> > On Thu, Oct 23, 2014 at 06:30:07AM -0700, Laszlo Danielisz wrote: >> >> >> >> I'm sending the varnishlog, I could not find anything about the >> request in it. >> > >> > Either you have very high timeouts in varnish or you're not hitting >> varnish. 
>> > This looks like varnish 3, so try disabling log collation with
>> > varnishlog -O
>> >
>> >> What is interesting is that varnish reports the backends sick after a
>> >> couple seconds BUT with "HTTP 200 OK", while the .url defined in the
>> >> probe is still up.
>> >
>> > Did you configure a 0 second timeout for those probes?
>>
>> Can we get some confirmation that a curl request to the backend produces
>> a valid response? If wowza is not set to respond correctly to requests,
>> varnish is not the place to start troubleshooting.
>>
>> ----
>>
>> Stefan Caunter
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part --------------
An HTML attachment was scrubbed...

From numard at gmail.com Wed Oct 29 22:42:26 2014
From: numard at gmail.com (Norberto Meijome)
Date: Thu, 30 Oct 2014 08:42:26 +1100
Subject: varnish load balance - HLS
In-Reply-To: <1414071726.35160.YahooMailNeo@web160705.mail.bf1.yahoo.com>
References: <1414063733.36664.YahooMailNeo@web160705.mail.bf1.yahoo.com> <20141023114051.GK19870@nerd.dk> <1414071726.35160.YahooMailNeo@web160705.mail.bf1.yahoo.com>
Message-ID:

Hi Laszlo,

From the log you sent, it looks like your 4 wowza back ends are sick and won't get any requests. You would normally get a 503 on your client in this case unless, as suggested by someone else, you have very long timeouts in your backend config?

First make sure varnish treats your back end correctly. It's been some years since I used wowza/hls, but if you are sending encrypted streams, make sure all your wowza nodes have the key and session info available - otherwise you need sticky sessions. This may apply as well for non-encrypted streams; wowza may have shared session info nowadays, YMMV, etc..
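Spotting the "Went sick" flaps in a saved varnishlog capture can be done mechanically. A small Python sketch (the sample lines are modeled on the Backend_health records quoted in this thread; the script itself is not part of the original discussion):

```python
import re
from collections import Counter

# Sample Backend_health records, in the shape quoted elsewhere in this thread.
log = """\
0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 0.000000
0 Backend_health - backend0 Went sick 4--X--- 2 3 5 0.000000 0.000000
0 Backend_health - backend3 Went sick 4--X--- 2 3 5 0.000000 0.000000
"""

# Capture the backend name and the reported state transition.
pattern = re.compile(r"Backend_health - (\S+) (Still healthy|Went sick|Back healthy)")

events = Counter()
for line in log.splitlines():
    m = pattern.search(line)
    if m:
        events[(m.group(1), m.group(2))] += 1

for (backend, state), n in sorted(events.items()):
    print(f"{backend}: {state} x{n}")
```

Run against a real capture (e.g. `varnishlog > log.txt`, as suggested above), a backend that keeps alternating between "Went sick" and "Back healthy" stands out immediately, which is exactly the flapping the probe-timeout discussion in this thread is about.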
As suggested also, first step would make sure you see your client request in varnishlog (regardless of the success of the request).. As it stands, there's nothing there, you may be getting firewalled upstream... Good luck, On 24/10/2014 12:48 am, "Laszlo Danielisz" wrote: > Hi Andreas, > > I'm sending the varnishlog, I could not find anything about the request in > it. > What is interesting is that varnish reports the backends sick after a > couple seconds BUT with "HTTP 200 OK", while the .url defined in the probe > is still up. > > root at rdr00-cdn# varnishlog > 0 WorkThread - 0x7fffff5fabf0 start > 0 WorkThread - 0x7fffff3f9bf0 start > 0 WorkThread - 0x7fffff1f8bf0 start > 0 WorkThread - 0x7ffffeff7bf0 start > 0 WorkThread - 0x7ffffedf6bf0 start > 0 CLI - Rd vcl.load "boot" ./vcl.mY4NLfsb.so > 0 Backend_health - backend0 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend0 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend0 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend2 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend2 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend2 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 Backend_health - backend3 Still healthy ------H 0 0 0 0.000000 > 0.000000 > 0 CLI - Wr 200 36 Loaded "./vcl.mY4NLfsb.so" as "boot" > 0 CLI - Rd vcl.use "boot" > 0 CLI - Wr 200 0 > 0 CLI - Rd start > 0 WorkThread - 0x7ffffe7f3bf0 start > 0 Debug - "Acceptor is kqueue" > 0 CLI - Wr 200 0 > 0 WorkThread - 0x7ffffdbedbf0 start > 0 
WorkThread - 0x7ffffd9ecbf0 start > 0 WorkThread - 0x7ffffd7ebbf0 start > 0 WorkThread - 0x7ffffd5eabf0 start > 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend2 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend0 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070353 1.0 > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070356 1.0 > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070360 1.0 > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070363 1.0 > 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend0 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend3 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend2 Still healthy 4--X--- 3 3 5 0.000000 > 0.000000 HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: 
no-cache > Connection: close > Content-T > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070366 1.0 > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070369 1.0 > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070372 1.0 > 0 Backend_health - backend3 Went sick 4--X--- 2 3 5 0.000000 0.000000 > HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend0 Went sick 4--X--- 2 3 5 0.000000 0.000000 > HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend2 Went sick 4--X--- 2 3 5 0.000000 0.000000 > HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 Backend_health - backend1 Went sick 4--X--- 2 3 5 0.000000 0.000000 > HTTP/1.1 200 OK > Accept-Ranges: bytes > Server: WowzaStreamingEngine/4.1.0 > Cache-Control: no-cache > Connection: close > Content-T > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070375 1.0 > 0 CLI - Rd ping > 0 CLI - Wr 200 19 PONG 1414070378 1.0 > > > This is what I get with curl from varnish, and it times out after a while. > The firewall os off. > $ curl -v http://127.29.90.120/crossdomain.xml > * About to connect() to 127.29.90.120 port 80 (#0) > * Trying 127.29.90.120... > * Adding handle: conn: 0x7fc720803000 > * Adding handle: send: 0 > * Adding handle: recv: 0 > * Curl_addHandleToPipeline: length: 1 > * - Conn 0 (0x7fc720803000) send_pipe: 1, recv_pipe: 0 > ^C > > > On Thursday, October 23, 2014 1:43 PM, Andreas Plesner Jacobsen < > apj at mutt.dk> wrote: > > > On Thu, Oct 23, 2014 at 04:28:53AM -0700, Laszlo Danielisz wrote: > > > > > I'm trying to use varnish for load balance and cache in front of > streaming servers. > > The stream is HTTP Live streaming based which means there are a bunch of > playlists (m3u8) and chunk files (.ts). 
> > The thing is if I try to get a http page it is working, but I get no > response if I'm trying to play a stream? > > > Define "no response", and send a varnishlog of the failed request. > > -- > Andreas > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apj at mutt.dk Thu Oct 30 08:28:42 2014 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Thu, 30 Oct 2014 08:28:42 +0100 Subject: varnish load balance - HLS In-Reply-To: <1414180413.86060.YahooMailNeo@web160706.mail.bf1.yahoo.com> References: <20141023142730.GL19870@nerd.dk> <1414092326.84298.YahooMailAndroidMobile@web160701.mail.bf1.yahoo.com> <1414180413.86060.YahooMailNeo@web160706.mail.bf1.yahoo.com> Message-ID: <20141030072842.GN19870@nerd.dk> On Fri, Oct 24, 2014 at 12:53:33PM -0700, Laszlo Danielisz wrote: > Timeout is set to 1 second for each probe > 0 Backend_health - backend1 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK Please paste full config. Varnish concludes that your backend is down after 0.000000 seconds, so either it times out, or there's something it doesn't like the responses your backend sends. I'm thinking it may be the latter, so a packet dump of a probe would be nice, or perhaps just a hex dump of an http request matching the probe. > Here is a varnishlog -O While you were doing a request? If so, then your request is not reaching varnish. Check netstat and firewalls. -- Andreas From caoilte at gmail.com Thu Oct 30 11:31:20 2014 From: caoilte at gmail.com (Caoilte O'Connor) Date: Thu, 30 Oct 2014 10:31:20 +0000 Subject: Which release is bug 1561 fixed in? 
Message-ID:

Hi,

Can anyone confirm which release https://www.varnish-cache.org/trac/ticket/1561 is fixed in? It appears to be marked as fixed in 4.0.1 in September, even though 4.0.1 was released in June. I also can't see it on https://www.varnish-cache.org/trac/browser/doc/changes.rst?rev=bfe7cd

We're seeing it in 4.0.1, so I'm going to upgrade to 4.0.2 but would like to confirm that it is fixed there as well.

Thanks,
c

-------------- next part --------------
An HTML attachment was scrubbed...

From laszlo.danielisz at yahoo.com Thu Oct 30 23:02:18 2014
From: laszlo.danielisz at yahoo.com (Laszlo Danielisz)
Date: Thu, 30 Oct 2014 23:02:18 +0100
Subject: varnish load balance - HLS
In-Reply-To: <20141030072842.GN19870@nerd.dk>
References: <20141023142730.GL19870@nerd.dk> <1414092326.84298.YahooMailAndroidMobile@web160701.mail.bf1.yahoo.com> <1414180413.86060.YahooMailNeo@web160706.mail.bf1.yahoo.com> <20141030072842.GN19870@nerd.dk>
Message-ID: <65193F2B-EEB3-445F-8340-120CC3B35222@yahoo.com>

It turned out that by changing the timeout the backends stay healthy!

> On 30 Oct 2014, at 08:28, Andreas Plesner Jacobsen wrote:
>
>> On Fri, Oct 24, 2014 at 12:53:33PM -0700, Laszlo Danielisz wrote:
>>
>> Timeout is set to 1 second for each probe
>> 0 Backend_health - backend1 Still healthy 4--X--- 3 3 5 0.000000 0.000000 HTTP/1.1 200 OK
>
> Please paste your full config. Varnish concludes that your backend is down after
> 0.000000 seconds, so either it times out, or there's something it doesn't like
> about the responses your backend sends.
> I'm thinking it may be the latter, so a packet dump of a probe would be nice,
> or perhaps just a hex dump of an http request matching the probe.
>
>> Here is a varnishlog -O
>
> While you were doing a request? If so, then your request is not reaching
> varnish. Check netstat and firewalls.
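The fix Laszlo reports — raising the probe timeout — corresponds to the `.timeout` field of a probe definition. A sketch with illustrative values (the URL echoes the crossdomain.xml he curls earlier in the thread; the numbers and addresses are assumptions, not taken from his actual config):

```vcl
probe wowza_probe {
    .url = "/crossdomain.xml";   # assumption: same URL probed earlier in the thread
    .timeout = 5s;               # a 1s timeout was marking slow backends sick
    .interval = 10s;             # how often each backend is probed
    .window = 5;                 # look at the last 5 probe results...
    .threshold = 3;              # ...and require 3 good ones to count as healthy
}

backend backend0 {
    .host = "10.0.0.1";          # illustrative address
    .port = "80";
    .probe = wowza_probe;
}
```

With window/threshold set like this, a single slow probe no longer flips the backend to sick, which matches the behaviour change described above.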
>
> --
> Andreas
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From Stuart.Beck at mnetmobile.com Fri Oct 31 05:45:10 2014
From: Stuart.Beck at mnetmobile.com (Beck, Stuart (ADE-MNT))
Date: Fri, 31 Oct 2014 04:45:10 +0000
Subject: Consistent panic in http1_cleanup
Message-ID: <1414730707.31302.176.camel@ademnt-dlx0001.ap.corp.ipgnetwork.com>

Hi,

I am evaluating the Varnish cache in our environment in order to alleviate load on backend servers. I'm running with the following parameters on a Solaris 11.1 server:

sudo /opt/varnish-4.0.2/sbin/varnishd -a :6081 -u varnishuser -g varnishgroup -n /data/varnish/cache -f /data/varnish/conf.d/default.vcl -s malloc,256m -p 'cc_command=exec cp /opt/varnish-4.0.2/config/varnish-config.so %o'

(The copy option is to avoid having to install a compiler on the server the cache is running on, as the application is built elsewhere.)

I'm finding that the child process regularly restarts with the following panic message being logged to the messages file; this can be after a few minutes or a few hours. I'm wondering whether this is likely to be a bug or possibly something to do with my environment. It does seem to be similar to https://www.varnish-cache.org/trac/ticket/1552 but that was in a separate function.

Oct 31 01:45:56 t3 /data/varnish/cache[29402]: [ID 232431 local0.error] Child (29404) Panic message:
Oct 31 12:45:56 t3 Assert error in http1_cleanup(), cache/cache_http1_fsm.c line 207:
Oct 31 12:45:56 t3 Condition((req->vsl->wid) != 0) not true.
Oct 31 12:45:56 t3 errno = 131 (Connection reset by peer)
Oct 31 12:45:56 t3 thread = (cache-worker)
Oct 31 12:45:56 t3 ident = -smalloc,-smalloc,-hcritbit,ports
Oct 31 12:45:56 t3 Backtrace:
Oct 31 12:45:56 t3 80882f2: /opt/varnish-4.0.2/sbin/varnishd'pan_ic+0xd2 [0x80882f2]
Oct 31 12:45:56 t3 8080c43: /opt/varnish-4.0.2/sbin/varnishd'http1_cleanup+0x3b3 [0x8080c43]
Oct 31 12:45:56 t3 80813a7: /opt/varnish-4.0.2/sbin/varnishd'HTTP1_Session+0x467 [0x80813a7]
Oct 31 12:45:56 t3 8091d24: /opt/varnish-4.0.2/sbin/varnishd'ses_req_pool_task+0x64 [0x8091d24]
Oct 31 12:45:56 t3 808b79b: /opt/varnish-4.0.2/sbin/varnishd'Pool_Work_Thread+0x4bb [0x808b79b]
Oct 31 12:45:56 t3 80a1fe5: /opt/varnish-4.0.2/sbin/varnishd'wrk_thread_real+0xc5 [0x80a1fe5]
Oct 31 12:45:56 t3 fe43444c: /lib/libc.so.1'_thrp_setup+0x9d [0xfe43444c]
Oct 31 12:45:56 t3 fe4346f0: /lib/libc.so.1'_lwp_start+0x0 [0xfe4346f0]
Oct 31 12:45:56 t3 req = 8219030 {
Oct 31 12:45:56 t3 sp = 8212cd0, vxid = 0, step = R_STP_RESTART,
Oct 31 12:45:56 t3 req_body = R_BODY_INIT,
Oct 31 12:45:56 t3 restarts = 0, esi_level = 0
Oct 31 12:45:56 t3

My VCL file is just the example basic pass-through to the local apache server as follows (comments removed):

vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "80";
}

sub vcl_recv {
}

sub vcl_backend_response {
}

sub vcl_deliver {
}

--
Stuart Beck
Systems Administrator
P +61 8 8115 6649
E stuart.beck at mnetmobile.com
A Level 1, 16 Anster Street, Adelaide SA 5000