Support GHC 9 (with both Aeson 1 and Aeson 2) #96
Conversation
Hi @thomasjm. Thanks for your PR. I'm waiting for a kubernetes-client member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Everything under the `/kubernetes` folder is auto-generated. Fortunately, the haskell-http-client code in https://github.com/OpenAPITools/openapi-generator has recently been updated for GHC 9 and aeson 2 (it simply switches to aeson 2 instead of attempting to support both): OpenAPITools/openapi-generator#12309. Firstly, I would just switch everything over to aeson 2 and exclude aeson 1, to be consistent with the above. Secondly, this PR needs to change the targeted openapi-generator commit and then re-gen the /kubernetes folder with the latest generator code; after that, only a single manual update to the generated code is still needed.
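(For reference, and not part of this PR: a minimal aeson-2-only sketch of the kind of API difference the regenerated code has to absorb. In aeson 2, `Object` is a `KeyMap` keyed by `Key` rather than a `HashMap Text`; the `lookupField` helper below is hypothetical.)

```haskell
-- Hypothetical helper, aeson 2 only: Object is a KeyMap keyed by Key,
-- so plain String/Text field names must be converted via Data.Aeson.Key.
module AesonTwoSketch where

import Data.Aeson (Value (..))
import qualified Data.Aeson.Key as Key
import qualified Data.Aeson.KeyMap as KeyMap

lookupField :: String -> Value -> Maybe Value
lookupField name (Object o) = KeyMap.lookup (Key.fromString name) o
lookupField _    _          = Nothing
```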
Okay, so if I understand correctly, the `/kubernetes` folder is meant to be regenerated rather than hand-edited, so I shouldn't have modified those files directly? If we're upstreaming things to `openapi-generator`, could we also upstream the CPP for switching between Aeson 1 and 2?
Yes, we could upstream the CPP for switching between Aeson 1 and 2. It's only a little more work because there are more files in the generator and sample code that would need CPP, but nothing too bad (https://github.com/OpenAPITools/openapi-generator/pull/12309/files). I'd probably just like to do that myself, since I'm more familiar with how to update that project.
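(To make the CPP concrete: a sketch, assuming both aeson 1 and aeson 2 stay in the build matrix. The module and function names are illustrative, not the generator's actual template code; the `MIN_VERSION_aeson` macro is available because the package depends on aeson.)

```haskell
{-# LANGUAGE CPP #-}
-- Hypothetical compatibility shim: one key/lookup API that compiles
-- against both aeson 1 (HashMap Text) and aeson 2 (KeyMap Key).
module AesonCompat (AesonKey, toAesonKey, lookupAesonKey) where

import Data.Aeson (Object, Value)
import Data.Text (Text)

#if MIN_VERSION_aeson(2,0,0)
import qualified Data.Aeson.Key as Key
import qualified Data.Aeson.KeyMap as KeyMap

type AesonKey = Key.Key

toAesonKey :: Text -> AesonKey
toAesonKey = Key.fromText

lookupAesonKey :: AesonKey -> Object -> Maybe Value
lookupAesonKey = KeyMap.lookup
#else
import qualified Data.HashMap.Strict as HashMap

type AesonKey = Text

toAesonKey :: Text -> AesonKey
toAesonKey = id

lookupAesonKey :: AesonKey -> Object -> Maybe Value
lookupAesonKey = HashMap.lookup
#endif
```

The idea is that generated and hand-written code only call the shim, so the version-specific CPP stays in one module.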
@thomasjm I added a commit to a copy of
/lgtm
@thomasjm: The following test failed, say `/retest` to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@@ -52,9 +54,16 @@ instance Arbitrary Date where
arbitrary = Date <$> arbitrary
shrink (Date xs) = Date <$> shrink xs

#if MIN_VERSION_aeson(2,0,0)
Nit: could avoid an empty if statement here with `!MIN_VERSION_aeson(2,0,0)`.
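(Spelling the nit out, as a hypothetical reconstruction rather than the exact hunk: when only the pre-aeson-2 side needs extra declarations, negating the macro avoids an `#if` with an empty branch. The stand-in type below is not from the PR; the macro exists because the package depends on aeson.)

```haskell
{-# LANGUAGE CPP #-}
module ArbitraryCompatSketch () where

import qualified Data.Text as T
import Test.QuickCheck (Arbitrary (..))

-- Stand-in type; the real code would guard aeson-1-only instances.
newtype FieldName = FieldName T.Text

-- Flagged shape:   #if MIN_VERSION_aeson(2,0,0) / #else / <code> / #endif
-- Suggested shape: negate the condition so there is no empty branch.
#if !MIN_VERSION_aeson(2,0,0)
instance Arbitrary FieldName where
  arbitrary = FieldName . T.pack <$> arbitrary
#endif
```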
/remove-lgtm
The Travis build will have to be updated for GHC 9.
Thanks for updating the generator! I built it locally with both Aeson 1 and 2 and it looks good. Trying to update the Travis config now. Have you considered switching to GitHub Actions? In my experience there tends to be less boilerplate with the GitHub Haskell action, so it's easier to update. For example: https://github.com/codedownio/sandwich/blob/master/.github/workflows/sandwich.yml
Actually, I think we may have already moved away from Travis to Prow. It looks like this is the failing job for the Kubernetes Prow test runner: https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/kubernetes-clients-haskell-unit-tests. This may be the file controlling the Prow job: https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-client/haskell/client-haskell-presubmits.yaml. It looks like a much simpler configuration, so we probably just need to update the Docker image to GHC 9 for the default build configuration.
Okay, like this? kubernetes/test-infra#26444
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
@jonschoning maybe it's time to give up on kubernetes/test-infra#26444? Is it possible to merge this anyway?
This PR works great; it would be nice to have it merged. Thanks @thomasjm!
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@thomasjm: Reopened this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@thomasjm, here is a GitHub Action based on this branch.
@peterbecich Switching to GitHub Actions sounds great to me. In fact, I suggested it earlier in this thread, but I was told that Kubernetes projects use Prow, and then we got mired in the attempt to do that via the Kubernetes test-infra machinery. FWIW, I think the Prow approach is inferior anyway, because it uses a single Dockerfile with a baked-in version of GHC, whereas GitHub Actions lets us test all supported GHC versions. What do you think @jonschoning? We could merge this PR, and then separately merge one containing the GitHub Actions config. Then we'll have everything working and CI running in this repo, with no dependency on kubernetes/test-infra.
@thomasjm I agree we should try switching to GitHub Actions. I think some of the other projects may be doing something similar now as well.
Okay, what do you think about the plan of merging this now and then doing a separate PR to get CI working? Once that's all green we can get a release on Hackage. Or would you rather do those in a different order? Also, do we need another approver from the
IMO we should fix compilation for GHC 9.2, 9.4, etc. in subsequent PRs.
@jonschoning friendly ping: I think finishing this off depends on you :)
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jonschoning, thomasjm. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm
I started this PR with the goal of supporting `aeson` 2, which is used on the Stackage resolvers starting from the first week of February 2022. As it turned out, the changes for `aeson` 2 were pretty minimal, but the changes to support `hoauth2` were more extensive, since `hoauth2` went through a series of minor breaking changes at `2.0`, `2.2`, and `2.3`. I've added separate CPP sections to support each of these.

So, this version should support both Aeson 1 and 2, as well as all the `hoauth2` versions. I added a couple of `stack.yaml` files to test these configurations. (I'd suggest setting up a GitHub Actions pipeline to build all of these? I could open a PR with that if you'd like.)

One thing to call out is that the `oauth2RedirectUri` option became required in `hoauth2-2.3.0`. I don't really understand why this is required for the OIDC authentication flow used by this library, since it wasn't provided in the past. But I added it to the `OIDCAuth` type, again protected by CPP.

I broke things up by commit to make it easier to review. Thanks!
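To illustrate that last point: a sketch only, where the record and field names are assumptions rather than the library's actual `OIDCAuth` definition, the `URI` type is assumed to be the `uri-bytestring` one that `hoauth2` uses, and the `MIN_VERSION_hoauth2` macro is available only because the package depends on `hoauth2`.

```haskell
{-# LANGUAGE CPP #-}
-- Sketch of a CPP-guarded record field: the redirect URI only exists as a
-- field when building against hoauth2 >= 2.3, where it became required.
module OIDCAuthSketch where

import Data.Text (Text)
#if MIN_VERSION_hoauth2(2,3,0)
import URI.ByteString (URI)
#endif

data OIDCAuthSketch = OIDCAuthSketch
  { oidcIssuerURL :: Text
  , oidcClientID  :: Text
#if MIN_VERSION_hoauth2(2,3,0)
  , oidcRedirectURI :: URI  -- hypothetical field mirroring oauth2RedirectUri
#endif
  }
```

Callers that construct the record then need matching CPP (or a smart constructor) so every supported hoauth2 version keeps building.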