Fix envoy build #6714
Conversation
@zirain: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@zirain I chatted with @phlax a bit and he mentioned that there are some reasons not to use the hermetic llvm toolchain; for my understanding, can you elaborate on why not just go with the hermetic toolchain? FWIW, I have a PR that seems to make the proxy build work with the hermetic LLVM toolchain: #6726 (e.g. it builds locally and passes the tests on CI).
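For context on what "hermetic" means here: Bazel downloads and pins the compiler itself instead of picking up whatever clang happens to be on the host. A minimal sketch of such a registration, assuming the toolchains_llvm ruleset; the version pin below is illustrative and not taken from #6726:

    # WORKSPACE sketch - registering a hermetic LLVM toolchain via
    # toolchains_llvm. The llvm_version pin is illustrative only.
    load("@toolchains_llvm//toolchain:deps.bzl", "bazel_toolchain_dependencies")

    bazel_toolchain_dependencies()

    load("@toolchains_llvm//toolchain:rules.bzl", "llvm_toolchain")

    llvm_toolchain(
        name = "llvm_toolchain",
        llvm_version = "18.1.8",  # downloaded and pinned by Bazel, not the host
    )

    load("@llvm_toolchain//:toolchains.bzl", "llvm_register_toolchains")

    llvm_register_toolchains()

Once registered, every build uses the same pinned compiler regardless of the CI runner image, which is the reproducibility argument being made in this thread.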
relatedly - i have most of the issues resolved that were making the non-hermetic build fail - but i definitely agree - it makes a lot more sense to just use the hermetic toolchain
If there is a will, we should just move these extensions to envoy and use an official build. There were some arguments about keeping old glibc working, but the same applies to Envoy, and honestly, that's a distributor's problem, not an OSS project's problem.
wrt glibc - as we now have separate sysroots - building for earlier glibc should be fairly easy. the blocker to doing this in envoy is that it risks multiplying builds/tests/artifacts
@phlax yeah, that's where it got stuck last time. it'd be best to use the standard build (not contrib), but that requires bringing the extensions here up to standards
@kyessenov just for my understanding, the idea is to take an upstream Envoy image, use it as a base, and add pilot-agent to it (in the ideal world, once everything is moved to upstream Envoy)?
@krinkinmu Yeah, or just add pilot-agent to it. It reduces toil IMO. There's nothing special added in the istio org that is not there in upstream AFAIK, and debugging workflows would benefit from using the upstream tools.
Will dump it here in case it's useful at some point in the future. Currently upstream Envoy is built using Ubuntu Focal as the build container; with Ubuntu Focal we should have glibc 2.27 (or some Ubuntu patch on top of that version). In Istio it seems that we are using Ubuntu Bionic, which comes with glibc 2.31. So going with upstream Envoy (if we don't change anything) would require bumping the glibc version.

Ubuntu Bionic is way past EOL at this point - my understanding is that unless you have a subscription, you don't actually get access to fixes for the base system (which includes glibc).

As for why we are concerned about the glibc version, I don't really fully understand: most things are packaged into a docker container together with the version of glibc the binaries need, so it should not matter. One place where there might be an issue is Istio CNI, because it's my understanding that a binary from that image is expected to run directly on the node, outside the container. Are there any other concerns with bumping the glibc version?
bionic == 2.27. istio is building with the older version to increase support surface.

small correction to the rest - envoy now builds with debian trixie - but uses a sysroot from bullseye (which is equiv to focal):

env:
  PPA_TOOLCHAIN_VERSION: focal
  DEBIAN_VERSION: bullseye
  STDCC_VERSION: 13
  GLIBC_VERSION: 2.31
the static binary will only work on a glibc at least as new as the one it was compiled against - hence we no longer support many versions of distros
@phlax that might be the case for glibc, but there are other implementations of libc that actually do allow building truly static binaries that would not have that problem. Still, statically linked or not, when we package it into a container we package it with glibc, so it will have everything it needs to work.
not everyone uses our containers and/or glibc version - tbf it's mostly redhat users, i think due to longer support cycles - but we get a lot of complaints about the binary not working on older distros - and we historically maintained build containers etc so they could build for those versions. fwiw we have some distro tests that are mostly just checking this. switching to musl is an option - one that has been suggested historically - not totally against it, but iiuc there are some tradeoffs
@krinkinmu The complaint about having to use new glibc was from two cases: 1) RedHat (maybe ask them again), 2) Istio used to support VMs with raw binaries.
Istio still supports VMs with raw binaries, and it's the only documented and tested approach. Previously we had discussed that removing old glibc was dependent on someone owning docker-VM docs/testing, which no one has done (afaik anyhow)
There you go. So basically someone needs to own moving the "raw VM" case to containers, and until then Istio is stuck with Bionic forever AFAICT. I personally don't know anyone who would use Istio like that.
so moving forward - i think old glibc is a valid use case - im happy to add a sysroot for that in toolshed - which can then be used with a hermetic toolchain

if there is not a good reason otherwise - i strongly recommend that the ci here just uses the hermetic toolchain - sysroot compat dependent obv

the other option, as @kyessenov has suggested, is that we publish what is needed by istio in envoy. i guess this means adding whatever additional extensions are required - and perhaps additional glibc toolchains

mho is that the extensions probably should live in envoy itself - not sure wrt testing/docs/etc or publishing of istio-specific bins
xref: envoyproxy/envoy#39679
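Tying the two threads together: the sysroot phlax describes plugs directly into the hermetic toolchain. A rough sketch of that combination with toolchains_llvm, where the sysroot repository name is a hypothetical placeholder for whatever sysroot toolshed would publish:

    # WORKSPACE sketch - hermetic clang compiling against an older-glibc
    # sysroot. "@sysroot_linux_x64_glibc227" is a hypothetical repository
    # name standing in for a toolshed-published sysroot (bionic == 2.27).
    load("@toolchains_llvm//toolchain:rules.bzl", "llvm_toolchain")

    llvm_toolchain(
        name = "llvm_toolchain",
        llvm_version = "18.1.8",  # illustrative pin
        # Headers and libraries come from the sysroot rather than the host,
        # so the produced binary runs on any distro with this glibc or newer.
        sysroot = {
            "linux-x86_64": "@sysroot_linux_x64_glibc227//:sysroot",
        },
    )

    load("@llvm_toolchain//:toolchains.bzl", "llvm_register_toolchains")

    llvm_register_toolchains()

The compiler stays pinned by Bazel while glibc compatibility is controlled entirely by which sysroot is wired in - which is what makes "old glibc as a sysroot in toolshed" compatible with "just use the hermetic toolchain".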