Problem with unstable go-sqlite3 version - 2.0.3 #975
What version of Go are you using? Also, what is the output of …?
I have the same problem as @Shriram-RP, but everything works after changing the GOPROXY.

Before changing the GOPROXY:

```
foo@bar golang-dir % go mod tidy
go: github.com/mattn/[email protected]+incompatible: reading github.com/mattn/go-sqlite3/go.mod at revision v2.0.1: unknown revision v2.0.1
go: downloading github.com/mattn/go-sqlite3 v2.0.1+incompatible
go: github.com/mattn/[email protected]+incompatible: reading github.com/mattn/go-sqlite3/go.mod at revision v2.0.1: unknown revision v2.0.1
```

Changing the GOPROXY:

```
foo@bar golang-dir % export GOPROXY=https://proxy.golang.org
```

After changing the GOPROXY:

```
foo@bar golang-dir % go mod tidy
go: downloading github.com/mattn/go-sqlite3 v2.0.1+incompatible
```
Version: 1.17. go env output: GO111MODULE="" …
One way to fix this is to add the code below to the go.mod file.
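The snippet itself did not survive in this copy of the thread; based on the exclude workaround described later on, it likely looked something like the following sketch (the module path and the pinned v1 version are illustrative):

```
module example.com/yourapp // hypothetical module path

go 1.17

// Pin a working v1 release...
require github.com/mattn/go-sqlite3 v1.14.9

// ...and keep module resolution from ever selecting the accidental v2 tag.
exclude github.com/mattn/go-sqlite3 v2.0.3+incompatible
```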
Any fix?
This is not an issue with go-sqlite3. v2 was an accident, as noted in README.md.
@mattn I wonder if this would be a reason to use a retraction and just re-tag v2: https://golang.org/ref/mod#go-mod-file-retract The impact of this is fairly broad: large projects seem to have been affected as well as small random odds and ends, and I can't imagine what else this might have landed in. A retraction seems like it would prevent tons of people from having to stuff workarounds into their go.mod files.
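For reference, a retraction is declared in the module's own go.mod (for +incompatible versions, that is the unsuffixed v1 module). A minimal sketch of what it might look like for the accidental tags mentioned in this thread (v2.0.1–v2.0.3 shown; adjust to whichever tags actually existed). Whether it actually takes effect for these legacy versions is what gets tested further down:

```
module github.com/mattn/go-sqlite3

go 1.16 // the retract directive is understood by Go 1.16 and newer

retract (
	v2.0.1+incompatible // accidental tag, not a real v2 release
	v2.0.2+incompatible // accidental tag
	v2.0.3+incompatible // accidental tag
)
```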
Once you clean the module cache on your environment, version 1.14 should be used next time.
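For anyone unsure how to do that, the module cache can be cleared with the standard toolchain commands:

```
# Remove the entire local module cache, then re-resolve dependencies.
go clean -modcache
go mod tidy
```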
That's not the case. People will be stumbling onto this for a while. Commits like this and this are actively referenced in projects today. There are 39,000 files that come up when I search to get an idea of how widespread this is.
@protosam From what I understand, retracting the v2 tags won't resolve any of the existing issues. It just stops the issue from continuing to grow. If I have a dependency on some other module that in turn has a dependency on one of the broken v2 tags of this library, then go mod will still attempt to download that version. And unless you are using proxy.golang.org (or similar), it will fail. The only time retraction matters is when someone is explicitly upgrading their dependencies. In other words, even if a retraction is published, someone would still have to go to every one of the broken dependents and fix their go.mod to reference a valid release. Then all affected developers have to go and update to the newer versions of those modules.

@mattn I'm guessing the reason this just became an issue recently is because the v2 tags were deleted from this repo. Is that correct? If so, would you be able to re-create them for the same commits? That way people using "direct" at least won't get failed builds. (This is orthogonal to retracting the v2 releases.)
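As a side note, a quick way to find which dependency is dragging in the broken version (standard go tooling, nothing specific to this repo):

```
# Shortest explanation of why the module is in your build.
go mod why -m github.com/mattn/go-sqlite3

# Every edge in the module graph that mentions it.
go mod graph | grep mattn/go-sqlite3
```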
Anyone know which commit I should tag?
@mattn I'm looking into this. I expect to find it in a go.sum file somewhere.
Google had them cached in sum.golang.org and they're also still listed at pkg.go.dev: v2.0.0+incompatible, … I'm actually a bit confused by the serialization. I'm posting what I got in case someone else can make sense of it faster than I can.
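For anyone else digging: the checksum database exposes a plain-text lookup endpoint, so the cached record can be fetched directly. A sketch using one of the versions above; the response contains the record ID, the two go.sum lines (module tree hash and go.mod hash), and a signed tree head, which is presumably the serialization referred to:

```
curl 'https://sum.golang.org/lookup/github.com/mattn/go-sqlite3@v2.0.0+incompatible'
```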
Should all tags be restored?

I restored the older v2.0.X tags. How does it look now?

So you (who used v2.0.X) should modify go.mod to switch to v1.4.X.
The thousands of forks and archive.org are pretty useful; these commits look good to me, and below I've tested them. They work as I expect (they validated against sum.golang.org). I did some additional testing to see how the retraction will work. I don't think it works against legacy versioning. Ideally, if it worked, we would be able to get a message out to end users so they know they're using bad versions, and later these versions could be gracefully reclaimed for reuse. No idea where to go with that discovery though. =/

So — Tags: …

Started with an empty project and tested getting a module that has a lower dependent that requires one of the v2.0.X tags. Below I tested against …
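A sketch of the kind of test described above, as it would have looked while the tags were restored (the module name is made up; GOPROXY=direct forces a fetch from the origin repository, and the default checksum database then verifies that the re-created tags hash to the same bits it recorded originally):

```
mkdir /tmp/sqlite3-v2-check && cd /tmp/sqlite3-v2-check
go mod init example.com/v2check   # hypothetical module path

# Bypass the proxy so the restored tag is fetched from GitHub itself;
# sum.golang.org still holds the original hashes and would reject a
# tag that points at a different commit.
GOPROXY=direct go get github.com/mattn/[email protected]+incompatible
```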
@mattn I'd probably pin this issue while it's affecting people, otherwise they are going to open new ones, as they rarely look at slightly older issues. I'd also reach out to the Go team at this point to see if they have any ideas to stop the spread and fix the problem (e.g. remove these entries from the sumdb? Not sure if that's possible or a good idea). Retracting would probably mean that you can't use these tags in the future.
This issue has already been reported to the Go team: golang/go#35732
@mattn As far as I can tell, that issue was closed as working as intended. I was rather suggesting contacting them about this specific case, hoping that they could remove the offending entries from the sumdb/proxy.
Thank you! This fixed my problem. When I ran …
Version 5.22 introduced a new option to /etc/containers/policy.json called keyPaths, see containers/image#1609. EL9 immediately took advantage of this new feature and started using it, see https://gitlab.com/redhat/centos-stream/rpms/containers-common/-/commit/04645c4a84442da3324eea8f6538a5768e69919a

This quickly became an issue in our code: the Go library (containers/image) parses the configuration file very strictly and refuses to create a client when a policy.json with an unknown key is present on the filesystem. As we used 5.21.1, which doesn't know the new key, our unit tests started failing when containers-common was present.

Reproducer:

```
podman run --pull=always --rm -it centos:stream9
dnf install -y dnf-plugins-core
dnf config-manager --set-enabled crb
dnf install -y gpgme-devel libassuan-devel krb5-devel golang git-core
git clone https://github.com/osbuild/osbuild-composer
cd osbuild-composer

# install the new containers-common and run the test
dnf install -y https://kojihub.stream.centos.org/kojifiles/packages/containers-common/1/44.el9/x86_64/containers-common-1-44.el9.x86_64.rpm
go test -count 1 ./...

# this returns:
--- FAIL: TestClientResolve (0.00s)
    client_test.go:31:
        Error Trace:  client_test.go:31
        Error:        Received unexpected error:
                      Unknown key "keyPaths"
                      invalid policy in "/etc/containers/policy.json"
                      github.com/containers/image/v5/signature.NewPolicyFromFile
                          /osbuild-composer/vendor/github.com/containers/image/v5/signature/policy_config.go:88
                      github.com/osbuild/osbuild-composer/internal/container.NewClient
                          /osbuild-composer/internal/container/client.go:123
                      github.com/osbuild/osbuild-composer/internal/container_test.TestClientResolve
                          /osbuild-composer/internal/container/client_test.go:29
                      testing.tRunner
                          /usr/lib/golang/src/testing/testing.go:1439
                      runtime.goexit
                          /usr/lib/golang/src/runtime/asm_amd64.s:1571
        Test:         TestClientResolve
    client_test.go:32:
        Error Trace:  client_test.go:32
        Error:        Expected value not to be nil.
        Test:         TestClientResolve
```

When run with an older containers-common, it succeeds:

```
dnf install -y https://kojihub.stream.centos.org/kojifiles/packages/containers-common/1/40.el9/x86_64/containers-common-1-40.el9.x86_64.rpm
go test -count 1 ./...
PASS
```

To sum it up, I had to upgrade github.com/containers/image/v5 to v5.22.0. Unfortunately, this wasn't so simple, see:

```
go get github.com/containers/image/v5@latest
go: github.com/containers/image/[email protected] requires
    github.com/letsencrypt/[email protected] requires
    github.com/honeycombio/[email protected] requires
    github.com/gobuffalo/pop/[email protected] requires
    github.com/mattn/[email protected]+incompatible: reading github.com/mattn/go-sqlite3/go.mod at revision v2.0.3: unknown revision v2.0.3
```

It turns out that github.com/mattn/[email protected]+incompatible has recently been retracted (mattn/go-sqlite3#998) and this broke a ton of packages depending on it. I was able to fix it by adding

```
exclude github.com/mattn/go-sqlite3 v2.0.3+incompatible
```

to our go.mod, see mattn/go-sqlite3#975 (comment). After adding it, go get github.com/containers/image/v5@latest succeeded and tools/prepare-source.sh took care of the rest.

Signed-off-by: Ondřej Budai <[email protected]>
The tags have disappeared again :(
The 2.0.X versions were retracted. https://pkg.go.dev/github.com/mattn/go-sqlite3?tab=versions You may already have go-sqlite3 version 2.0.X. Please update manually to 1.4.X.
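A sketch of that manual update from a dependent module (v1.14.16 is the newest v1 release mentioned later in this thread; substitute whatever the current v1 release is):

```
# Switch the dependency back to a v1 release and tidy the module graph.
go get github.com/mattn/[email protected]
go mod tidy
```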
@mattn Even though you retracted those versions, those tags should never be deleted, so that people can continue to download their dependencies, including for older versions of their projects.
@rittneje Yes. My understanding is that once people manually revert to 1.4.X, they will not be upgraded back thereafter, since 2.X was retracted. I believe this is a problem that cannot be recovered from automatically.
My point is that deleting those v2 tags breaks people's builds, so we should not do that. We can only strongly discourage their use going forward (which the retraction helps with).
I have no plans to release v2 anymore.
But those tags were already released, so there are other libraries and projects that reference them. By deleting them, those versions cannot be built properly anymore (without using a Go module proxy), which was the original stated issue. We can tell people not to use v2 in new versions of their code anymore, but we should not cause the older/existing versions to break.
v2 was an accidental release because go mod incorrectly identified the branch name as a tag. #998 is a PR that adds a retract directive as a way to fix these problems.
Retract doesn't really "fix" the problem per se. In fact, the problem is not really fixable in the way you are hoping. Once a version has been released, it should not be deleted. The retraction fixes some latent issues with fetching/upgrading via go get. Typically people are using the default Go module proxy at proxy.golang.org. By design, this proxy does not allow versions to be deleted, so that you have eternal build reproducibility. This is an important feature for most people, as they expect all previous versions of their code to always work. However, in the event that you are not using a proxy (GOPROXY=direct), …

My understanding is there were two issues in the history of this repo. The first is the existence of a branch named v2, …
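For context, the proxy and checksum database described above are the defaults in recent Go releases; shown explicitly here as a sketch:

```
# Defaults in modern Go: fetch through proxy.golang.org (which never
# deletes cached versions) and verify against sum.golang.org.
export GOPROXY=https://proxy.golang.org,direct
export GOSUMDB=sum.golang.org

# With GOPROXY=direct the go command talks to the origin VCS instead,
# which is where the "unknown revision v2.0.x" errors earlier in this
# thread come from once the tags are deleted.
```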
Yes, so I added a note in the README: https://github.com/mattn/go-sqlite3#go-sqlite3 I have not released a v2 in my other projects either, because I do not break compatibility. I also don't want to make it a hassle for users. Fortunately, as far as I can see, many people still have v1, because the bump to v2 is not automatic. So I decided to save the v1 users instead of the v2 users. Do you still think we should release v2.1.X or v3? BTW, go get with a fresh go.mod always gets 1.4.X.
I think you should continue releasing v1.X.Y, as you have been. (In the event you do decide to make breaking changes, going to v3 is probably best to have a clean break from v2.) However, the v2 tags that were already published should continue to exist indefinitely; you just won't be releasing any new v2.X.Y tags.

Yes, this is due to fixes in …
I figured that leaving the v2 tag could be confusing to new users.
Yes, that matches my understanding. Thanks. BTW, I haven't touched the v2 tags recently.
Indeed, I can update my code, but there are tons of third-party libs which require go-sqlite3 v2.x.x.
I see that …
I got the same problem, but it went away after I did …
@mattn Deleting the v2.0.3 tag broke code that I hadn't needed to touch for years. Now when I try to use v1.14.9 I get this error for code that worked fine before: …

None of the logic I wrote works anymore, and it used to. I just get empty values when I query now. How do I fix this? I have …
@drgrib Are you getting an error, or an empty result set? How specifically are you compiling and executing your code? Have you tried the latest release (1.14.16)? Also, what does …?

#855 is not fixable, as the Go toolchain does not support that mode of operation. You must compile with cgo enabled.
@rittneje I guess I'm technically not getting an error, just that message when I run, along with empty result sets. My bad too. I had …
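For anyone hitting the same symptom: go-sqlite3 is a cgo package, so the usual fix is to build and test with cgo enabled and a C compiler installed. A minimal sketch:

```
# Without cgo the driver is a stub that fails at runtime (the
# "compiled with CGO_ENABLED=0" message mentioned above), which can
# look like empty results if the returned errors are ignored.
CGO_ENABLED=1 go build ./...
CGO_ENABLED=1 go test ./...
```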
Hi,
We on the platform team at Razorpay use bulk-insert, which internally uses
github.com/jinzhu/gorm v1.9.12
and creates an indirect dependency on github.com/mattn/go-sqlite3 v2.0.3. Reference: https://github.com/t-tiger/gorm-bulk-insert/blob/master/go.mod#L14
Now, this setup was working fine for us until last week, when we suddenly had a failure in our Docker build for GitHub Actions citing:
github.com/mattn/[email protected]+incompatible: unknown revision v2.0.3
We also saw one of your issues where you mentioned versions 2.0+ were unstable (issue).
Did you suddenly stop supporting version 2.0.3 in the last 7-10 days? We were wondering what caused the breakage of our build. It would be helpful if you could shed some light on this issue.
Thanks,
Shriram