ci: don't run license check if dependencies don't change #1968
base: master
Conversation
Please be careful with it, because some transitive dependencies may change their version significantly between two WWD builds even if our own package.json/pom.xml doesn't change.
I don't think this is possible for package.json; that's what the lock file is for. The only case where I see this could happen is by referencing the latest Orbit repo in the target file, but I would expect that everything in Orbit is vetted. Btw, it was Eclipse CBI's advice not to run the license check if dependencies don't change.
It has happened multiple times with package.json, as we don't commit the lock file, nor do we want to, to prevent extra commits way too often. Note: if the lock file is committed and a ranged dependency is updated on npm within the range, this was marking the build dirty and thus failing it.
Checking whether dependencies changed is actually more complex work than calling CBI dash-analysis (which is more or less just querying a database/bugtracker). In addition to a local build, it would require fetching the last published snapshot of wildwebdeveloper, extracting the package-lock.json out of it, and comparing the meaningful parts of it with what was just built.
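To make that concrete, here is a rough sketch (Node/TypeScript; the file locations are hypothetical placeholders, not how the build actually lays things out) of what comparing the "meaningful parts" of two lockfiles could look like: reduce each package-lock.json to a map of resolved versions and diff the maps.

```ts
// Sketch only: reduce each package-lock.json to "package path -> resolved version"
// and compare. The file paths below are assumptions for illustration.
import { readFileSync } from "fs";

function resolvedVersions(lockPath: string): Map<string, string> {
  const lock = JSON.parse(readFileSync(lockPath, "utf8"));
  const versions = new Map<string, string>();
  // package-lock v2/v3 keeps the full tree under "packages"; "" is the root project itself.
  for (const [pkgPath, entry] of Object.entries<any>(lock.packages ?? {})) {
    if (pkgPath !== "") {
      versions.set(pkgPath, entry.version);
    }
  }
  return versions;
}

const previous = resolvedVersions("last-snapshot/package-lock.json"); // extracted from the last published build
const current = resolvedVersions("package-lock.json");                // produced by the local build

const changed = [...current].filter(([pkg, version]) => previous.get(pkg) !== version);
console.log(changed.length === 0
  ? "Dependency graph unchanged; the license check could be skipped."
  : `Changed dependencies: ${changed.map(([pkg, v]) => `${pkg}@${v}`).join(", ")}`);
```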
I actually think we should commit the lockfile. I forgot about that. Dependabot notifies about direct dependencies in package.json, so even if we commit the lockfile there won't be more PRs coming from it.
@mickaelistria maybe you want to chime in here on eclipse-dash/dash-licenses#448?
We don't keep the lock file between builds. This means the transitive deps get updated on every build.
Yes, @akurtakov already pointed that out to me. I believe we should actually keep the lockfiles for reproducible builds.
I strongly believe that reacting to upstream changes is more important than guaranteeing reproducible builds.
One problem with not having lockfiles is that as soon as you create a release build, the bundled transitive dependencies may have changed underneath you, potentially introducing bugs and inconsistencies. You might have been happily working with a snapshot build locally the whole time, but with different transitive dependencies than the ones used in the release build, so things suddenly break. Especially in the Node.js ecosystem, semver is often not strictly followed despite the use of semver-looking versions everywhere. In the JavaScript world it is standard practice to commit the lockfile; I do not see why we should not do the same. Not having a lockfile also makes it harder to reproduce issues that users report: we cannot just check out the tag and start the debugger; we'd have to download the released binaries as they were built. Having a lockfile would also help with the Dash rate-limit errors. The underlying capacity problem does not seem likely to be solved soon, especially as the Eclipse Foundation currently seems to put the responsibility on individual project teams to work around it, even though Dash is a mandatory central service.
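As a small illustration of the range problem (the versions and range here are invented), two versions far apart can both satisfy the same caret range, so two builds without a committed lockfile can legitimately resolve different transitive code:

```ts
// Illustration only: the versions and range are invented.
import * as semver from "semver"; // npm install semver

const range = "^5.1.0"; // a typical caret range declared somewhere in the transitive tree

// Both of these satisfy the same range, so the build run today and the one run
// last month may bundle different code when nothing pins the resolution.
console.log(semver.satisfies("5.1.0", range)); // true -- what was tested originally
console.log(semver.satisfies("5.4.2", range)); // true -- what a later build may silently pick up
```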
From my POV, all those problems you mention are either potential ones and/or less important for the continued success of the project than having every build automatically pick up new versions of dependencies (which can possibly have regressions, but much more probably bring bugfixes or other improvements).
The "always latest means fewer bugs" argument does not convince me. New transitive versions can introduce bugs or security problems just as much as they can fix them. With a lockfile we still pick up those updates, but on our terms, as a reviewable change, and we can bisect or roll back the exact dependency change if something goes wrong. Checked-in lockfiles are also a big win for security. Dependabot and GitHub's security scanners use the lockfile to raise security alerts and PRs for vulnerable resolved versions, not just for direct dependencies or loose ranges in And as said before, a stable dependency graph can help us right now with the annoying dash rate limit issue that, for example, made pretty much every build fail last week. |
Well, we could easily debate it, but overall software, like most immaterial work, mostly sees its quality increase over time. Sure, every change is a risk, but every component tries to mitigate this risk, the number of prevention techniques constantly grows, and every change exists because it is supposed to increase quality.
So far, everyone has been happy without having to review those changes. There was no need for it.
In the current state, when something goes wrong, the related commit is not applied and we don't publish a new build. If something is wrong in the dependency chain, it doesn't cascade into an incorrect build; it "freezes" project development, as fixing the dependency issue becomes the priority.
Basing project design decisions on the (transient) limitations of an auxiliary tool IMO often leads to misplaced effort. Pressing the Eclipse Foundation to make the infrastructure tools they offer more reliable (ideally by involving the committer representative to ensure this escalates fast enough) would lead to a more profitable result for the whole ecosystem. All that said, if you commit to ensuring Wild Web Developer keeps its dependencies as up to date with all its internal components as they are now, I am not against someone else doing more work. I personally would just rather not deal with such tasks, which seem of very low profit for the ecosystem. But if a committer sees a benefit for them and does things so that the project doesn't suffer, then no one has real arguments to block them.
What are your thoughts on the missed security checks for transitive npm dependencies (which would be solved by checking in the lock file)? I actually hadn't thought about this before this discussion. Do we have any other means currently to get these bundled files reliably checked for security issues?
The checks are happening on npm's end on every build when npm install is run.
AFAIK npm install only prints a warning and does not fail builds, and I doubt anyone actively monitors the CI logs of release builds for that, so we get no push notification and probably are not aware of these issues.
Ah, my bad, I thought npm did fail or bounce to another version by default when one is marked as CVE'd...
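For reference, npm audit (unlike the warnings printed by npm install) does exit non-zero when it finds vulnerabilities at or above a chosen level, so a gate like the following sketch could be added to CI if we ever wanted vulnerabilities to fail the build; the "high" threshold is an assumption:

```ts
// Sketch of a CI gate: fail the build when npm audit reports vulnerabilities
// at or above a chosen level. The chosen threshold is an assumption.
import { spawnSync } from "child_process";

const audit = spawnSync("npm", ["audit", "--audit-level=high"], { stdio: "inherit" });
if (audit.status !== 0) {
  console.error("npm audit reported vulnerabilities; failing the build.");
  process.exit(audit.status ?? 1);
}
```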
This avoids unnecessary calls to the ClearlyDefined API, which constantly fails with rate-limit errors.
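For illustration, the "skip when dependencies didn't change" condition could be approximated with something like the sketch below. This is not the actual change in this PR; the baseline ref is an assumption, and the manifest names come from the package.json/pom.xml mentioned earlier in the thread.

```ts
// Sketch only: decide whether to run the license check based on whether the
// dependency manifests changed. A real pipeline would diff against the last
// released/green build rather than the previous commit on master.
import { execSync } from "child_process";

const changedFiles = execSync("git diff --name-only origin/master~1..HEAD", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

// package.json and pom.xml are the manifests discussed above.
const dependenciesChanged = changedFiles.some(
  (file) => file.endsWith("package.json") || file.endsWith("pom.xml")
);

if (dependenciesChanged) {
  console.log("Dependency manifests changed; running the dash license check.");
  // e.g. execSync("mvn org.eclipse.dash:license-tool-plugin:license-check", { stdio: "inherit" });
} else {
  console.log("No dependency changes; skipping the license check to avoid ClearlyDefined calls.");
}
```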