
Conversation

@zirain
Member

zirain commented Dec 3, 2025

zirain requested a review from a team as a code owner December 3, 2025 07:06
@istio-testing
Collaborator

istio-testing commented Dec 5, 2025

@zirain: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| test-asan_proxy | 965e35d | link | true | /test test-asan |
| release-test_proxy | 965e35d | link | true | /test release-test |
| release-test-arm64_proxy | 965e35d | link | true | /test release-test-arm64 |

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@krinkinmu

@zirain I chatted with @phlax a bit and he mentioned that there are some reasons not to use the hermetic LLVM toolchain. For my understanding, can you elaborate on why not just go with the hermetic toolchain?

FWIW, I have a PR that seems to make the proxy build work with the hermetic LLVM toolchain: #6726 (e.g. it builds locally and passes the tests on CI).

@phlax

phlax commented Dec 5, 2025

relatedly - i have most of the issues resolved that were making the non-hermetic build fail - but i definitely agree - it makes a lot more sense to just use the hermetic toolchain

@kyessenov
Contributor

If there is a will, we should just move these extensions to Envoy and use an official build. There were some arguments about keeping old glibc working, but the same applies to Envoy, and honestly, that's a distributor's problem, not an OSS project's problem.

@phlax

phlax commented Dec 5, 2025

wrt glibc - as we now have separate sysroots - building for earlier glibc should be fairly easy

the blocker to doing this in envoy is that it risks multiplying builds/tests/artifacts

@kyessenov
Contributor

@phlax yeah, that's where it got stuck last time. it'd be best to use the standard build (not contrib), but that requires bringing the extensions here up to standards

@krinkinmu

> @phlax yeah, that's where it got stuck last time. i'd be best to use the standard build (not contrib) but that requires bringing the extensions here up to standards

@kyessenov just for my understanding, the idea is to take an upstream Envoy image and use it as a base and add to it pilot-agent (in the ideal world, once all things are moved to upstream Envoy)?

@kyessenov
Contributor

@krinkinmu Yeah, or just add pilot-agent to it. It reduces toil IMO. There's nothing special added in istio org that is not there in upstream AFAIK, and debugging workflows would benefit from using the upstream tools.

@krinkinmu

Will dump this here in case it's useful at some point in the future. Currently upstream Envoy is built in an Ubuntu Focal container; with Ubuntu Focal we should have glibc 2.27 (or some Ubuntu patch on top of that version). In Istio it seems that we are using Ubuntu Bionic, which comes with glibc 2.31.

So going with upstream Envoy (if we don't change anything) would require bumping the glibc version.

Ubuntu Bionic is way past EOL at this point - my understanding is that unless you have a subscription, you don't actually get access to fixes for the base system (which includes glibc).

As for why we are concerned about the glibc version, I don't fully understand: most things are packaged into a Docker container together with the version of glibc the binaries need, so it should not matter. One place where there might be an issue is Istio CNI, because my understanding is that a binary from that image is expected to run directly on the node, outside the container.

Are there any other concerns with bumping the glibc version?

@phlax

phlax commented Dec 5, 2025

> Currently upstream Envoy is built using Ubuntu Focal as a container when building, with Ubuntu Focal we should have glibc 2.27 (or some Ubuntu patch on top of that version). In Istio it seems that we are using Ubuntu Bionic, which comes with glibc 2.31.

bionic == 2.27
focal == 2.31

istio is building with the older version to increase support surface

small correction to the rest - envoy now builds with debian trixie - but uses a sysroot from bullseye (which is equivalent to focal)

      env:
        PPA_TOOLCHAIN_VERSION: focal
        DEBIAN_VERSION: bullseye
        STDCC_VERSION: 13
        GLIBC_VERSION: 2.31

> As for why we are concerned about glibc version

a static binary will only work when compiled against particular versions - hence we no longer support many versions of distros

@krinkinmu

> static binary will only work when compiled against particular versions - hence we no longer support many versions of distros

@phlax that might be the case for glibc, but there are other implementations of libc that do allow building truly static binaries that would not have that problem. Still, statically linked or not, when we package it into a container we package it with glibc, so it will have everything it needs to work.
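As a side note, on a glibc-based image you can confirm at runtime which glibc the container actually ships, which is what the "packaged together" argument relies on. A minimal sketch, assuming Linux with glibc (it will fail on musl-based images such as Alpine, where `libc.so.6` does not exist):

```python
import ctypes

# Load the C library the process is already linked against
# (glibc on most mainstream Linux distros).
libc = ctypes.CDLL("libc.so.6")

# gnu_get_libc_version() is a glibc-specific extension; it returns the
# runtime glibc version string, e.g. "2.31" on Ubuntu Focal.
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version().decode())
```

Running this inside the distributed container versus on the host makes the mismatch phlax describes directly observable.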

@phlax

phlax commented Dec 5, 2025

> Still, statically linked or not, when we package it to a container we package it with glibc, so it will have everything it needs to work.

not everyone uses our containers and/or glibc version - tbf it's mostly redhat users, i think due to longer support cycles - but we get a lot of complaints about the binary not working on older distros - and we historically maintained build containers etc so they could build for those versions

fwiw we have some distro tests that mostly just check this

switching to musl is an option - one that has been suggested historically - not totally against, but iiuc there are some tradeoffs

@kyessenov
Contributor

@krinkinmu The complaint about having to use new glibc came from two cases: 1) RedHat (maybe ask them again), 2) Istio used to support VMs with raw binaries.

@howardjohn
Member

howardjohn commented Dec 5, 2025 via email

@kyessenov
Contributor

> Istio still supports VMs with raw binaries and it's the only documented and tested approach. Previously we had discussed removing old glibc was dependent on someone owning docker-VM docs/testing, which no one has done (afaik anyhow)

There you go. So basically someone needs to own moving the "raw VM" case to use containers, and until then Istio is stuck with Bionic forever AFAICT. I personally don't know anyone who would use Istio like that.

@phlax

phlax commented Dec 5, 2025

so moving forward - i think old glibc is a valid use case - im happy to add a sysroot for that in toolshed - which can then be used with a hermetic toolchain

if there is not a good reason otherwise - i strongly recommend that the ci here just uses the hermetic toolchain - sysroot compat dependent obv

the other option as @kyessenov has suggested is that we publish what is needed by istio in envoy

i guess this means adding whatever additional extensions are required - and perhaps additional glibc toolchains

mho is that the extensions probably should live in envoy itself - not sure wrt testing/docs/etc or publishing of istio-specific bins
