Conversation

@sebthom
Member

@sebthom sebthom commented Nov 17, 2025

This avoids unnecessary calls to the ClearlyDefined API, which constantly fails with rate-limit errors.

@sebthom sebthom requested a review from akurtakov November 17, 2025 19:00
@vrubezhny
Contributor

Please be careful with it, because some transitive dependencies may change their version significantly between two WWD builds even if our own package.json/pom.xml doesn't change.

@sebthom
Member Author

sebthom commented Nov 17, 2025

I don't think this is possible for package.json; that's what the lock file is for. The only case where I could see this happening is by referencing the latest orbit repo in the target file, but I would expect that everything in orbit is vetted.

Btw, it was Eclipse CBI's advice not to run the license check if dependencies don't change.

@akurtakov
Contributor

It has happened multiple times with package.json, as we don't commit the lock file, nor do we want to, to prevent extra commits way too often. Note: if the lock file is committed and a ranged dependency is updated on npm within the range, this was marking the build dirty and thus failing it.
With the above said, a decision has to be made: whether to accept non-vetted content ending up in a build, or to constantly suffer from the dash tool's insane instability. What I would actually like to see is the dash tool being run at least periodically (once per week?) in a very visible way, and once it fails, it should put a note on every PR that merging is not safe. How much effort such a thing would take, I have no clue, and it might not be worth it.

@mickaelistria
Contributor

Btw, it was Eclipse CBI's advice not to run the license check if dependencies don't change.

Checking whether dependencies changed is actually more complex work than calling CBI dash-analysis (which is more or less just querying a database/bugtracker). It would require, in addition to a local build, fetching the last published snapshot of wildwebdeveloper, extracting the package-lock.json out of it, and comparing the meaningful parts of it with what was just built.
I would expect the CBI dash service to be implemented well enough to support being "free" when querying some already used dependencies. If not, then the problem is IMO more to fix on CBI end than on every consumer having to set up some dependency diff mechanism.

@sebthom
Member Author

sebthom commented Nov 18, 2025

I actually think we should commit the lockfile; I forgot about that. Dependabot notifies about direct dependencies in package.json, so even if we commit the lockfile, there won't be more PRs coming from it.

@sebthom
Member Author

sebthom commented Nov 18, 2025

I would expect the CBI dash service to be implemented well enough to support being "free" when querying some already used dependencies. If not, then the problem is IMO more to fix on CBI end than on every consumer having to set up some dependency diff mechanism.

@mickaelistria maybe you want to chime in on here eclipse-dash/dash-licenses#448 ?

@vrubezhny
Contributor

I don't think this is possible for package.json; that's what the lock file is for.

We don't keep the lock file between builds. This causes the transitive deps to be updated on every build.

@sebthom
Member Author

sebthom commented Nov 18, 2025

I don't think this is possible for package.json; that's what the lock file is for.

We don't keep the lock file between builds. This causes the transitive deps to be updated on every build.

Yes, @akurtakov already pointed that out to me. I believe we should actually keep the lockfiles for reproducible builds.

@mickaelistria
Contributor

I believe we should actually keep the lockfiles for reproducible builds.

I strongly believe that reacting to upstream changes is more important than guaranteeing reproducible builds.
But if one wants a reproducible build, one possibility to achieve that would be to add the content of the package-lock.json to the tag description, so one could re-add the file if they want to reproduce a particular tag.
But in reality, although I started my career on Eclipse making reproducible builds, I never had to reproduce any build... So IMO the requirement for a reproducible build is a chimera. Focusing on the ease of maintaining source code, and with it dependency evolution, has always been more profitable from my perspective.
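The tag-description idea above could be realized roughly like this (a hedged sketch; the tag name `v1.2.3` is a placeholder, not a real project tag):

```shell
# Attach the current package-lock.json as the annotated tag's message
# (tag name "v1.2.3" is hypothetical)
git tag -a -F package-lock.json v1.2.3

# Later, to reproduce that tag's dependency set, extract the message back out
# and install exactly the versions it records:
git tag -l --format='%(contents)' v1.2.3 > package-lock.json
npm ci
```

This keeps the working tree free of the lockfile while still archiving the resolved dependency graph with each release tag.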

@sebthom
Member Author

sebthom commented Nov 18, 2025

One problem with not having lockfiles is that as soon as you create a release build, the bundled transitive dependencies may have changed underneath you, potentially introducing bugs and inconsistencies. You might have been happily working with a snapshot build locally the whole time, but with different transitive dependencies than the ones used in the release build, so things suddenly break. Especially in the Node.js ecosystem, semver is often not strictly followed, despite semver-looking version numbers being used everywhere. In the JavaScript world it is standard practice to commit the lockfile; I do not see why we should not do the same.

Not having a lockfile also makes it harder to reproduce issues that users report: we cannot just check out the tag and start the debugger, we'd have to download the released binaries as they are built.
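As a concrete sketch of the checked-in lockfile workflow described above (assuming standard npm tooling; nothing here is project-specific):

```shell
# One-time: resolve the ranges in package.json and commit the result
npm install                      # writes/updates package-lock.json
git add package-lock.json
git commit -m "Pin transitive dependencies via lockfile"

# In CI, and when reproducing a reported issue from a tag:
npm ci                           # installs exactly the versions pinned in
                                 # package-lock.json; fails if package.json
                                 # and the lockfile are out of sync
```

With `npm ci`, checking out a tag and running the debugger would yield the same dependency set as the corresponding release build.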

Having a lockfile would also help with the Dash rate limit errors. The underlying capacity problem does not seem likely to be solved soon, especially as the Eclipse Foundation currently seems to put the responsibility on individual project teams to workaround it, even though Dash is a mandatory central service.

@mickaelistria
Contributor

From my POV, all those problems you mention are either potential ones, and/or less important for the continued success of the project than having every build automatically pick up new versions of dependencies (which can possibly have regressions, but much more probably have bugfixes or other improvements).
These days, no contributor has to care about updating the transitive dependencies, and we immediately benefit from bugfixes with zero effort. That's actually great, greater IMO than having reproducible builds. Reproducible builds are significantly more constraining, less agile, and more expensive to maintain. They are IMO not a project concern, but a product one.
Also, reproducible builds might only be a Long Term Maintenance problem. Do we want to bother with that kind of problem in the project when there is no one offering Long Term Maintenance for it, nor for most of its internal components anyway?

@sebthom
Member Author

sebthom commented Nov 18, 2025

The "always latest means fewer bugs" argument does not convince me. New transitive versions can introduce bugs or security problems just as much as they can fix them. With a lockfile we still pick up those updates, but on our terms, as a reviewable change, and we can bisect or roll back the exact dependency change if something goes wrong.

Checked-in lockfiles are also a big win for security. Dependabot and GitHub's security scanners use the lockfile to raise security alerts and PRs for vulnerable resolved versions, not just for direct dependencies or loose ranges in package.json. That makes it much easier to notice and fix security issues in the exact dependency set that ends up in our builds.

And as said before, a stable dependency graph can help us right now with the annoying dash rate limit issue that, for example, made pretty much every build fail last week.

@mickaelistria
Contributor

The "always latest means fewer bugs" argument does not convince me. New transitive versions can introduce bugs or security problems just as much as they can fix them

Well, we could easily debate it, but overall, software, like most immaterial work, mostly sees its quality increase. Sure, every change is a risk, but every component tries to mitigate this risk, the amount of prevention techniques constantly grows, and every change exists because it is supposed to increase quality.

With a lockfile we still pick up those updates, but on our terms, as a reviewable change

So far, everyone has been happy without having to review those changes. There was no need for it.
But if you're willing and available to take care of that, why not?

and we can bisect or roll back the exact dependency change if something goes wrong.

In the current state, when something goes wrong, the related commit is not applied and we don't publish a new build. If something is wrong in the dependency chain, it doesn't cascade into some incorrect build; it "freezes" project development, as fixing the dependency issue becomes the priority.
And that has happened very rarely as far as I know (@vrubezhny and @akurtakov can confirm or deny that, as they've dealt with updates more often than I have).
And when it happens, a remediation is often to wait for a new version of some other dependency to appear, and the issue gets fixed just like it appeared.

And as said before, a stable dependency graph can help us right now with the annoying dash rate limit issue that, for example, made pretty much every build fail last week.

Taking project design decisions based on (transient) limitations of an external tool is IMO often misplaced effort. Pressing the Eclipse Foundation to make the infrastructure tools they offer more reliable (ideally by involving the committer representatives to ensure this escalates fast enough) would lead to a more profitable result for the whole ecosystem.

All that said, I have to say that if you commit to ensuring Wild Web Developer keeps its dependencies as up to date with all its internal components as they are now, I am not against someone else doing more work. I personally would just strongly avoid dealing with such tasks, which seem of very low profit for the ecosystem. But if a committer sees a benefit for themselves and does things so that the project doesn't suffer, then no one has real arguments to block them.
In the worst case, someone, maybe you ;), will remove the package-lock.json and re-add it to .gitignore later.

@sebthom
Member Author

sebthom commented Nov 18, 2025

What are your thoughts on the missed security checks for transitive npm dependencies (which would be solved by checking in the lock file)? I actually hadn't thought about this before this discussion. Do we currently have any other means to get these bundled files reliably checked for security issues?

@mickaelistria
Contributor

What are your thoughts on the missed security checks for transitive npm dependencies (which would be solved by checking in the lock file)?

The checks happen on the npm end on every build when npm install is called, AFAIK, don't they?
So a build is good at the time it was produced. What happens later, if some included 3rd-party software is detected as vulnerable, is out of the project's hands. But people who want to verify their binaries can look in the jar, where there is a package-info.json, and audit it, or even audit the node_modules directly.
What happens next, if 3rd-party software is vulnerable, is that in a future build the fixed version will be included instead.

@sebthom
Member Author

sebthom commented Nov 18, 2025

AFAIK npm install only prints a warning and does not fail builds, and I doubt anyone actively monitors the CI logs of release builds for that, so we get no push notification and are probably not aware of these issues.

@mickaelistria
Contributor

AFAIK npm install only prints a warning and does not fail builds, and I doubt anyone actively monitors the CI logs of release builds for that, so we get no push notification and are probably not aware of these issues.

Ah, my bad, I thought npm failed or bounced to another version by default when one is marked as CVE'd...
So maybe just adding a step that invokes npm audit, or that reads the log and fails on security error messages, would be sufficient?
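A minimal sketch of such a CI step, assuming the standard npm CLI (the severity threshold is an arbitrary choice, not a project decision):

```shell
# npm audit exits non-zero when vulnerabilities at or above the given
# severity are found in the resolved dependency tree, so a plain
# invocation in CI is enough to fail the build
npm audit --audit-level=high
```

Without a committed lockfile, this would audit whatever versions the current build resolved; with one, it would audit the exact pinned set.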
