From scan-admin at coverity.com Sun Apr 5 11:46:27 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 05 Apr 2020 11:46:27 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5e89c5131751b_10422ace8f766f50728bc@appnode-2.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3Dez0T_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2FJiddSgSp8FyU9IW7N-2FKBg88gt8jovfxgA-2BOCimDVD6xcP-2FP-2FJjAFpKjZn-2BV5W3NY1TIMigd2znb4oQYKxbtwrszuQwu-2FHQzmfIW60anohRbX7QMZeT6f9s4qSWR5bemQK6iHtqC-2BjIWZysXMrioWLMtLwLNb89LcKwlqDEoID3J4PrdjXyyYMqo0Gg5eE3dc-3D Build ID: 305491 Analysis Summary: New defects found: 1 Defects eliminated: 19 If you have difficulty understanding any defects, email us at scan-admin at coverity.com, or post your question to StackOverflow at https://u2389337.ct.sendgrid.net/ls/click?upn=QsMnDxMCOVVs7CDlyD2jouKTgNlKFinTRd3y-2BJC7sZryfVdWHH2BBU620aHLHGfhMXPTHYY5wQ5zOiTMnTlWDg-3D-3DBVHy_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2FJiddSgSp8FyU9IW7N-2FKBg88gt8jovfxgA-2BOCimDVD62I1q84sj4Iv5kxbLU8dVUXGnL-2Fsz8L4FiYnc-2BnsVM1ooKtF3Fl95jRv-2BRktrMgM5Yr8yJr37tW6UcDL7doR88FQtEXOVjKTOA33MzvQgDnRBB5b0jaDJITVbMLB-2FKDk9COW18YovhzFNnmvG-2FFYQMA-3D From emilio.fernandes70 at gmail.com Wed Apr 15 11:33:22 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Wed, 15 Apr 2020 14:33:22 +0300 Subject: Support for AARCH64 In-Reply-To: References: <8156.1583910935@critter.freebsd.dk> Message-ID: El jue., 26 mar. 2020 a las 10:15, Martin Grigorov (< martin.grigorov at gmail.com>) escribi?: > Hello, > > Here is the PR: https://github.com/varnishcache/varnish-cache/pull/3263 > I will add some more documentation about the new setup. > Any feedback is welcome! > Nice work, Martin! Gracias! Emilio > > Regards, > Martin > > On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov > wrote: > >> Hi, >> >> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard < >> guillaume at varnish-software.com> wrote: >> >>> is that script running as root? >>> >> >> Yes. >> I also added 'USER root' to its Dockerfile and '-u 0' to 'docker run' >> arguments but it still doesn't work. >> The x86 build is OK. >> It must be something in the base docker image. >> I've disabled the Alpine aarch64 job for now. >> I'll send a PR tomorrow! >> >> Regards, >> Martin >> >> >>> -- >>> Guillaume Quintard >>> >>> >>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov < >>> martin.grigorov at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> I've moved 'dist' job to be executed in parallel with 'tar_pkg_tools' >>>> and the results from both are shared in the workspace for the actual >>>> packing jobs. >>>> Now the new error for aarch64-apk job is: >>>> >>>> abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD... >>>> ]0; DEBUG: 4 >>>> ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using >>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000 >>>> >>> varnish: Checking sanity of /package/APKBUILD... >>>> >>> WARNING: varnish: No maintainer >>>> >>> varnish: Analyzing dependencies... 
>>>> >>> varnish: Installing for >>>> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev >>>> py-docutils linux-headers libunwind-dev python py3-sphinx >>>> Waiting for repository lock >>>> ERROR: Unable to lock database: Bad file descriptor >>>> ERROR: Failed to open apk database: Bad file descriptor >>>> >>> ERROR: varnish: builddeps failed >>>> >>> varnish: Uninstalling dependencies... >>>> Waiting for repository lock >>>> ERROR: Unable to lock database: Bad file descriptor >>>> ERROR: Failed to open apk database: Bad file descriptor >>>> >>>> Google suggested doing this: >>>> rm -rf /var/cache/apk >>>> mkdir /var/cache/apk >>>> >>>> It fails at 'abuild -r' - >>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61 >>>> >>>> Any hints? >>>> >>>> Martin >>>> >>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < >>>> guillaume at varnish-software.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> So, you are pointing at the `dist` job, whose sole role is to provide >>>>> us with a dist tarball, so we don't need that command line to work for >>>>> everyone, just for that specific platform. >>>>> >>>>> On the other hand, >>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is >>>>> closer to what you want, `distcheck` will be called on all platforms, and you >>>>> can see that it has the `--with-unwind` argument. >>>>> -- >>>>> Guillaume Quintard >>>>> >>>>> >>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov < >>>>> martin.grigorov at gmail.com> wrote: >>>>>> >>>>>> >>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >>>>>> guillaume at varnish-software.com> wrote: >>>>>> >>>>>>> Compare your configure line with what's currently in use (or the >>>>>>> APKBUILD file); there are a few options (with-unwind, without-jemalloc, >>>>>>> etc.) that need to be set >>>>>>> >>>>>> >>>>>> The configure line comes from "./autogen.des": >>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42 >>>>>> It is called at: >>>>>> >>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40 >>>>>> In my branch at: >>>>>> >>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26 >>>>>> >>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for >>>>>> Alpine is fine. >>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine. >>>>>> >>>>>> Martin >>>>>> >>>>>> >>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov < >>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov < >>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Guillaume, >>>>>>>>> >>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard < >>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Martin, >>>>>>>>>> >>>>>>>>>> Thank you for that. >>>>>>>>>> A few remarks and questions: >>>>>>>>>> - how much time does the "docker build" step take? We can >>>>>>>>>> possibly speed things up by pushing images to Docker Hub, as they don't >>>>>>>>>> need to change very often. >>>>>>>>>> >>>>>>>>> >>>>>>>>> Definitely, such an optimization would be a good thing to do!
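A rough shell sketch of that image-reuse idea (the image name, tag and Dockerfile path below are hypothetical, not the project's actual setup): build the packaging image once, push it to Docker Hub, and let the CI job pull it instead of rebuilding every layer on each run, which is what the 'machine' executor currently does, as described just below.

    # Done once, and again only when the build dependencies change (not on every CI run).
    # 'example/varnish-pkg:alpine-aarch64' and the Dockerfile path are placeholders.
    docker build -t example/varnish-pkg:alpine-aarch64 -f Dockerfile.alpine-aarch64 .
    docker push example/varnish-pkg:alpine-aarch64

    # In the CI job: pull the prebuilt image and run the packaging script from the
    # branch above inside it, mounting the checked-out sources; only the sources
    # change between runs.
    docker pull example/varnish-pkg:alpine-aarch64
    docker run --rm -u 0 -v "$PWD:/work" -w /work \
        example/varnish-pkg:alpine-aarch64 ./.circleci/make-apk-packages.sh

With that split, a CI run pays only for the package build itself; the docker build cost is paid once per dependency change rather than once per run.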
>>>>>>>>> At the moment, with 'machine' executor it fetches the base image >>>>>>>>> and then builds all the Docker layers again and again. >>>>>>>>> Here are the timings: >>>>>>>>> 1) Spinning up a VM - around 10secs >>>>>>>>> 2) prepare env variables - 0secs >>>>>>>>> 3) checkout code (varnish-cache) - 5secs >>>>>>>>> 4) activate QEMU - 2secs >>>>>>>>> 5) build packages >>>>>>>>> 5.1) x86 deb - 3m 30secs >>>>>>>>> 5.2) x86 rpm - 2m 50secs >>>>>>>>> 5.3) aarch64 rpm - 35mins >>>>>>>>> 5.4) aarch64 deb - 45mins >>>>>>>>> >>>>>>>>> >>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? The >>>>>>>>>> idea was to have it cloned once in tar-pkg-tools for consistency and >>>>>>>>>> reproducibility, which we lose here. >>>>>>>>>> >>>>>>>>> >>>>>>>>> I will extract the common steps once I see it working. This is my >>>>>>>>> first CircleCI project and I still find my ways in it! >>>>>>>>> >>>>>>>>> >>>>>>>>>> - do we want to change things for the amd64 platforms for the >>>>>>>>>> sake of consistency? >>>>>>>>>> >>>>>>>>> >>>>>>>>> So far there is nothing specific for amd4 or aarch64, except the >>>>>>>>> base Docker images. >>>>>>>>> For example make-deb-packages.sh is reused for both amd64 and >>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (alpine). >>>>>>>>> >>>>>>>>> Once I feel the change is almost finished I will open a Pull >>>>>>>>> Request for more comments! >>>>>>>>> >>>>>>>>> Martin >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Guillaume Quintard >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Guillaume, >>>>>>>>>>>>> >>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>>>>>>> - arm64-compatible code (all good in >>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>>>>>>> >>>>>>>>>>>>>> So, everything is in place, expect for the third point. At >>>>>>>>>>>>>> the moment, there are two concurrent CI implementations: >>>>>>>>>>>>>> - travis: >>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too. 
>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> - circleci: >>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>>>>>>> the packaged platforms >>>>>>>>>>>>>> >>>>>>>>>>>>>> The issue is that cirecleci doesn't support arm64 containers >>>>>>>>>>>>>> (for now?), so we would need to re-implement the packaging logic in Travis. >>>>>>>>>>>>>> It's not a big problem, but it's currently not a priority on my side. >>>>>>>>>>>>>> >>>>>>>>>>>>>> However, I am totally ready to provide help if someone wants >>>>>>>>>>>>>> to take that up. The added benefit it that Travis would be able to handle >>>>>>>>>>>>>> everything and we can retire the circleci experiment >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> I will take a look in the coming days and ask you if I need >>>>>>>>>>>>> help! >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I've took a look at the current setup and here is what I've >>>>>>>>>>>> found as problems and possible solutions: >>>>>>>>>>>> >>>>>>>>>>>> 1) Circle CI >>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on >>>>>>>>>>>> x86_64, so there is no way to build the packages in a "native" environment >>>>>>>>>>>> 1.2) possible solutions >>>>>>>>>>>> 1.2.1) use multiarch cross build >>>>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then >>>>>>>>>>>> builds and runs a custom Docker image that executes a shell script with the >>>>>>>>>>>> build steps >>>>>>>>>>>> It will look something like >>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it. >>>>>>>>>>>> The RPM and DEB build related code from current config.yml will >>>>>>>>>>>> be extracted into shell scripts which will be copied in the custom Docker >>>>>>>>>>>> images >>>>>>>>>>>> >>>>>>>>>>>> From these two possible ways I have better picture in my head >>>>>>>>>>>> how to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what you'd >>>>>>>>>>>> prefer. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I've decided to stay with Circle CI and use 'machine' executor >>>>>>>>>>> with QEMU. >>>>>>>>>>> >>>>>>>>>>> The changed config.yml could be seen at >>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>>>>>>> the build at >>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 >>>>>>>>>>> (emulation!) ~40mins >>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for CentOS >>>>>>>>>>> 7 and Ubuntu 18.04, both amd64 and aarch64. >>>>>>>>>>> TODOs: >>>>>>>>>>> - migrate Alpine >>>>>>>>>>> >>>>>>>>>> >>>>>>>> Build on Alpine aarch64 fails with: >>>>>>>> ... >>>>>>>> automake: this behaviour will change in future Automake versions: >>>>>>>> they will >>>>>>>> automake: unconditionally cause object files to be placed in the >>>>>>>> same subdirectory >>>>>>>> automake: of the corresponding sources. >>>>>>>> automake: project, to avoid future incompatibilities. >>>>>>>> parallel-tests: installing 'build-aux/test-driver' >>>>>>>> lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS >>>>>>>> multiply defined in condition TRUE ... 
>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ... >>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here >>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>>>>>>> automake_boilerplate.am' included from here >>>>>>>> + autoconf >>>>>>>> + CONFIG_SHELL=/bin/sh >>>>>>>> + export CONFIG_SHELL >>>>>>>> + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' >>>>>>>> --enable-maintainer-mode --enable-developer-warnings >>>>>>>> --enable-debugging-symbols --enable-dependency-tracking >>>>>>>> --with-persistent-storage --quiet >>>>>>>> configure: WARNING: dot not found - build will fail if svg files >>>>>>>> are out of date. >>>>>>>> configure: WARNING: No system jemalloc found, using system malloc >>>>>>>> configure: error: Could not find backtrace() support >>>>>>>> >>>>>>>> Does anyone know a workaround ? >>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image >>>>>>>> >>>>>>>> Martin >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> - store the packages as CircleCI artifacts >>>>>>>>>>> - anything else that is still missing >>>>>>>>>>> >>>>>>>>>>> Adding more architectures would be as easy as adding a new >>>>>>>>>>> Dockerfile with a base image from the respective type. >>>>>>>>>>> >>>>>>>>>>> Martin >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> 2) Travis CI >>>>>>>>>>>> 2.1) problems >>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle! >>>>>>>>>>>> Althought if we use CircleCI 'machine' executor it will be >>>>>>>>>>>> slower than the current 'Docker' executor! >>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu >>>>>>>>>>>> Current setup at CircleCI uses CentOS 7. >>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu. >>>>>>>>>>>> >>>>>>>>>>>> 3) GitHub Actions >>>>>>>>>>>> GH Actions does not support ARM64 but it supports self hosted >>>>>>>>>>>> ARM64 runners >>>>>>>>>>>> 3.1) The problem is that there is no way to make a self hosted >>>>>>>>>>>> runner really private. I.e. if someone forks Varnish Cache any commit in >>>>>>>>>>>> the fork will trigger builds on the arm64 node. There is no way to reserve >>>>>>>>>>>> the runner only for commits against >>>>>>>>>>>> https://github.com/varnishcache/varnish-cache >>>>>>>>>>>> >>>>>>>>>>>> Do you see other problems or maybe different ways ? >>>>>>>>>>>> Do you have preferences which way to go ? >>>>>>>>>>>> >>>>>>>>>>>> Regards, >>>>>>>>>>>> Martin >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Regards, >>>>>>>>>>>>> Martin >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>> varnish-dev mailing list >>>>>>>>>>>>>> varnish-dev at varnish-cache.org >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev >>>>>>>>>>>>>> >>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From scan-admin at coverity.com Sun Apr 19 11:46:59 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 19 Apr 2020 11:46:59 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5e9c3a336bf1a_19e12ac85a6cef4c1e1@appnode-2.mail> Your request for analysis of varnish has been completed successfully. 
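On the "Could not find backtrace() support" error above: musl-based Alpine does not ship backtrace()/execinfo.h, and the usual way to get them is the libexecinfo package. A possible direction, sketched as untested shell against the multiarch/alpine:aarch64-edge image mentioned above; the package name, the LIBS override and the cache reset are things to try, not a confirmed fix.

    # Assumption: libexecinfo provides backtrace() on musl-based Alpine.
    apk add --no-cache libexecinfo-dev
    # If ./configure still cannot find backtrace(), point it at the library
    # explicitly (flags shortened from the full invocation quoted above):
    LIBS="-lexecinfo" ./configure --prefix=/opt/varnish --with-persistent-storage
    # For the "Unable to lock database: Bad file descriptor" apk errors seen under
    # emulation, recreating the apk cache before 'abuild -r' is the suggestion
    # already quoted earlier in the thread:
    rm -rf /var/cache/apk && mkdir -p /var/cache/apk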
The results are available at https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3DAB3F_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je49xWEwYvdMGZvSXkyvrIn-2Ft5gjuz47IwDPyRi9O7A0m7wXEg-2B9wm7WgAWf4kJROrNuRwJZAAwc14-2Bnz2EIMBa10S4XLX2b92ywKxxP1Injuq1TM77iuqb2md84QCCXuVnr9GT6j4pQBp8GhHZHVqWtJnr-2BMi-2FbSFPi6nPtb3m-2BM-2BI2KC3FlfpR4fSFkwzhjzm2r3hog0fY3WIrNRknq-2FY1W Build ID: 308116 Analysis Summary: New defects found: 0 Defects eliminated: 0 From phk at phk.freebsd.dk Mon Apr 20 09:44:47 2020 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 20 Apr 2020 09:44:47 +0000 Subject: VSB_quote has gotten messy Message-ID: <85270.1587375887@critter.freebsd.dk> I finally got around to look at VSB_QUOTE_GLOB feature Guillaume committed by accident some time ago, and it doesn't work correctly as far as I can tell, for instance, the difference between inputs: [] and ["] makes no sense to me. However, I can hardly blame Guillaume, because it is not very consistent or clear how VSB_QUOTE is supposed to work in the first place, I just spent 4 hours trying to find out, because we sort of made it up as we went. I propose that b0d1a40f326f... gets backed out before it has any use in the tree, and put an errata on the 6.4 release page to the effect of "do not use VSB_QUOTE_GLOB". I also propose that we should deprecate VSB_quote*() in its current form, ie: leave around for the benefit of VMODers for 7.x, remove in 8.x. Finally, I propose a new and more well thought, and better documented replacement, VSB_encode(), to be added shortly. Comments ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From dridi at varni.sh Mon Apr 20 12:12:15 2020 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 20 Apr 2020 12:12:15 +0000 Subject: VSB_quote has gotten messy In-Reply-To: <85270.1587375887@critter.freebsd.dk> References: <85270.1587375887@critter.freebsd.dk> Message-ID: On Mon, Apr 20, 2020 at 9:45 AM Poul-Henning Kamp wrote: > > I finally got around to look at VSB_QUOTE_GLOB feature Guillaume committed > by accident some time ago, and it doesn't work correctly as far as I > can tell, for instance, the difference between inputs: > [] > and > ["] > makes no sense to me. > > However, I can hardly blame Guillaume, because it is not very > consistent or clear how VSB_QUOTE is supposed to work in the first > place, I just spent 4 hours trying to find out, because we sort of > made it up as we went. As discussed previously, it was also on my review queue, and still is. > I propose that b0d1a40f326f... gets backed out before it has any > use in the tree, and put an errata on the 6.4 release page to the > effect of "do not use VSB_QUOTE_GLOB". Agreed. > I also propose that we should deprecate VSB_quote*() in its current > form, ie: leave around for the benefit of VMODers for 7.x, remove > in 8.x. No opinion, but if we replace the old API with a new one, maybe we can manage to translate old API calls to new ones? > Finally, I propose a new and more well thought, and better documented > replacement, VSB_encode(), to be added shortly. > > Comments ? One annoying limitation with the current quote logic is that calls must be self-contained. If I want to quote a string that will come in multiple parts, I have to assemble it (eg. with a VSB!) prior to feeding it to VSB_quote(). 
If we land a new API, it would be nice to be able to delimit begin and end steps, either with flags or more functions: AZ(VSB_encode_begin(vsb, VSB_ENCODE_JSON)); /* adds the leading quote */ /* encode a VCL_STRANDS in a loop */ AZ(VSB_encode_end(vsb)); /* adds the trailing quote */ If we move struct strands to libvarnish, we can also have VSB_encode_strands(), but being able to encode multiple string components independently of struct strands would be appreciated. Dridi From scan-admin at coverity.com Sun Apr 26 11:47:50 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 26 Apr 2020 11:47:50 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5ea574e5c15d0_50122ac85a6cef4c1a0@appnode-2.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3DAkjz_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je48goZfj6Cfv12G-2BifX66Oz5qD2pzLQN4-2BUn0TwWbtghiXg8SQK5R-2BOp7LGMVAHDThdWghZLKkpX9Tjy2YzfT1JWqXk2JsvSmuqBxWenuMCgZOCK8HniSlXDfsew12ktL2EShC5pXSnH5A4qprn8oWgu67kKgvfNxjB4c3sU-2F-2FTlUDMsa8BjdhEYPxp-2B2cvIgYEHKbIXHlk4V7trSvS0usIQ Build ID: 309707 Analysis Summary: New defects found: 0 Defects eliminated: 0