maintainers/scripts/haskell/exclude.nu: init #441204
wolfgangwalther wants to merge 4 commits into NixOS:haskell-updates from
Conversation
Why not do a more extreme version of this: `lib.optionalAttrs config.allowAliases (lib.genAttrs …)`? CI would flag issues with this that would otherwise stay hidden, and it should be a bit faster.
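A minimal sketch of what that suggestion might look like (attribute names and the error message are illustrative, not taken from the PR):

```nix
# Sketch only: gate the generated stub attributes behind allowAliases.
# `excludedNames` is a hypothetical list of removed package names.
lib.optionalAttrs config.allowAliases (
  lib.genAttrs excludedNames (
    name: throw "haskellPackages.${name} was removed because it is broken and unmaintained"
  )
)
```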
This also introduces the weird property that stuff will fail to evaluate with aliases, but not without them. I think it is usually not worth it to keep around package expressions for packages that are missing dependencies.
(Not directly related: consider `let foo = builtins.fromJSON (builtins.readFile …).foo; in …` outside of any lambda, which will benefit from the import cache.)
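A sketch of the hinted-at pattern, assuming the data lives in a JSON file (the file name and attribute names are made up for illustration):

```nix
# Parsed once at top level, outside any lambda, so repeated imports of
# this file hit Nix's import cache instead of re-parsing the JSON on
# every function call.
let
  excluded = (builtins.fromJSON (builtins.readFile ./excluded-packages.json)).foo;
in
  # `excluded` is shared by every invocation of the function below.
  pkgs: removeAttrs pkgs excluded
```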
Why not do a more extreme version of this: `lib.optionalAttrs config.allowAliases (lib.genAttrs …)`? CI would flag issues with this that would otherwise stay hidden, and it should be a bit faster.
I assume you mean it literally like this, so not creating these attributes without aliases. That won't work, because...
This also introduces the weird property that stuff will fail to evaluate with aliases, but not without them. I think it is usually not worth it to keep around package expressions for packages that are missing dependencies.
The problem is that some packages are missing these dependencies but still build fine. They might be missing these dependencies only for the benchmark, which is not enabled by default. If we remove these packages, that will remove a ton of packages that actually build - and I think some very central packages with a lot of reverse dependencies are affected.
The problem is that "missing argument" throws way too early. We really need something like #442066 to get both of:
- Not failing eval for the `haskellPackages.exposed` attributes, because they are not evaluated.
- But failing eval when any of these dependencies are used from other packages.
Currently we have to go back to `null`, otherwise eval will just fail on hitting them in `haskellPackages`.
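In other words, the status quo roughly looks like this in the generated expressions (simplified; `foo` and `bar` are placeholders):

```nix
# The removed dependency is passed as null, so evaluating the attribute
# itself succeeds; the failure only surfaces when foo is actually built.
foo = callPackage ./foo.nix { bar = null; };
```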
Hm what if we evaluate the package descriptions with benchmarks disabled in hackage2nix (but not cabal2nix)? I would be surprised if many people actually build the benchmarks from Nixpkgs.
We could try, but I'd imagine that this only shifts the problem. There could be other cases where it's not the benchmark, but some of the tests. When we explicitly disable these relevant parts in Nixpkgs, then we don't need their dependencies either.
Doing it this way means we'd be in some kind of deadlock situation:
- The script would remove the dependency because it's broken and outdated.
- The script would then remove the package itself.
- We try to add the package back, and the only way to do it might be to use `dontCheck`.
- The script can't tell and would remove the package again the next time it's run.
So I really think we should not change the package expressions at all and instead pass `throw`s or `null`s - and improve on that in the future.
I mean... we could theoretically just take this little piece of the alias PR already and use that independently of `config.allowAliases`:
Aka, we'd create a tiny fake "derivation" (so fake that it doesn't even have `type = "derivation"`). This would just prevent the eval errors in CI, because it's essentially an attrset without `recurseForDerivations`. But when this is used as a dependency for any hackage package, it will trigger the error.
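A hypothetical sketch of such a fake "derivation" (the helper name and error message are invented for illustration):

```nix
# Deliberately *not* a real derivation: no `type = "derivation"` and no
# `recurseForDerivations`, so CI's eval walk does not descend into it.
# Anything that uses it as a dependency forces outPath and hits the throw.
mkRemoved = name: {
  pname = name;
  outPath = throw "haskellPackages.${name} has been removed";
};
```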
(Not directly related: consider `let foo = builtins.fromJSON (builtins.readFile …).foo; in …` outside of any lambda, which will benefit from the import cache.)
Done.
Aka, we'd create a tiny fake "derivation" (so fake that it doesn't even have `type = "derivation"`). This would just prevent the eval errors in CI, because it's essentially an attrset without `recurseForDerivations`. But when this is used as a dependency for any hackage package, it will trigger the error.
Did that. This will throw the same error with and without `allowAliases`, but still pass CI.
I think checking whether a broken package is deprecated is also a good shout, though a lot of deprecated packages are still necessary to build other things.
Can't we make exclusion a constraint? Having a script to clean up useless entries in excluded-packages.json sounds better than having to run a script to re-add packages periodically. With this you could be very aggressive and remove most of the broken packages, though it would have the flaw that a Hackage metadata revision wouldn't resurrect a package.
The way I understand "deprecation" to work on Hackage, it is specific to a certain version. Although I remember having seen something else as well. I haven't found the right API endpoints to catch this stuff yet. IIUC, the version-specific deprecation is exposed as "preferred versions" or so? But maybe that's something else again. As long as we remove the whole package without considering versions, this doesn't apply yet. In any case, I think deprecated packages will by definition not be updated anymore, so they will end up in any time-related cutoff eventually.
That sounds like a good idea, indeed - with the caveat(s) below.
True. And also, it would still be a problem with the transitive broken cases, which we are also removing right now. Example:

The exclude script will now exclude both of them. Now, bar is updated. We re-introduce it by either method (version constraints or periodic checks). It builds fine. But we don't know whether foo builds now or not - it is still excluded. I guess we would have to also include all direct reverse dependencies of

Ultimately... a script to run periodically to just include all packages temporarily and start building might be the simplest solution to this problem. It would catch all cases. Aka, my initial idea of rewriting
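The elided example can be pictured roughly like this (hypothetical expressions; `foo` depends on `bar`, and both are broken and old):

```nix
# Both get excluded by the script. If bar is later updated and
# re-introduced, foo stays excluded even though it might build again.
bar = mkDerivation { pname = "bar"; broken = true; /* last upload years ago */ };
foo = mkDerivation { pname = "foo"; libraryHaskellDepends = [ bar ]; broken = true; };
```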
This script populates `excluded.yaml`, which allows hackage2nix to skip generating expressions for many old packages.

The script uses very simple logic right now: any package that is both broken and has not been updated for 5 years on hackage is filtered out entirely. An exception is made for packages which are also listed in the current Stackage snapshot - these are kept, because otherwise hackage2nix will complain, as it can't deal with the constraints on these packages.

This does not look at reverse dependencies, so it will very likely result in changes to hackage-packages.nix, where some packages that are still left in the set are passed `null` for their removed dependencies. I'd argue that this is not a problem per se: these packages were transitively broken before and would now fail to build because of *missing* dependencies if anyone tried to resurrect them. As part of fixing these packages, the contributor would have to fix that dependency anyway; they'd now need to remove it from the exclusion list to do so. Not considering reverse dependencies makes the logic considerably simpler.

Another thing this does not consider is when an already excluded package is resurrected on hackage, i.e. gets a new upload. This is left out for now on purpose - should we hit these cases more often, we can still write a script to automate removal from this list as well.
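The described rule could be sketched in Nix terms like this (the real script is written in Nushell; all names here are illustrative):

```nix
# Exclude a package only if it is broken, stale for 5+ years, and not
# pinned by the current Stackage snapshot.
excluded = lib.filterAttrs (name: pkg:
  pkg.broken
  && pkg.yearsSinceLastUpload >= 5
  && !(builtins.elem name stackageSnapshotPackages)
) hackagePackagesMeta;
```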
Created by maintainers/scripts/haskell/exclude.nu.
These packages were previously marked as transitively broken, before the exclusion of old packages temporarily marked them unbroken. I only tested the first 10 or so, but they all failed to build. Chances are high that *all* of them will fail to build, because some of their dependencies were removed. In the odd case that one of them would succeed, this would show up in the next run of unbreak.nu.
Only works with cabal2nix-unstable pointing at NixOS/cabal2nix#667.
* The affected packages were marked broken again in a second commit.
Some stats:
cc @emilazy for the diffstat - and the fun!