No matching certificate to load: decoding certificate metadata: invalid character '}' #6481
Comments
Hey Skifree, thanks for reporting this. 😉 That's weird! @elee1766 Is this possibly due to the race condition you were referring to in Slack? I don't think I've seen a malformed JSON file before, only an empty one. |
this looks like one file is written on top of another without clearing the first one. if a read is called after fsync, which is called after truncate and after write but before flush/close, this could happen I think? the new cert would have to be one character smaller than the old cert (is this possible? are any fields variable length?). but it's reading the old file size, so maybe something else happened at write time. regardless, something happened during the file write, maybe also during a read. but yes, my guess is the temp file approach fixes this. @mholt I doubt Go's JSON encoder would do this. |
@elee1766 Thanks for your input! Yeah, I think the _uniqueIdentifier may vary by 1-2 chars, but I'm not 100% sure on that. It's related to the ASN.1 encoding of the cert's serial number. But anyway, it sounds like we have a fix in the pipeline for this already. :) |
@solracsf I don't suppose you happen to have the previous JSON file for this cert? |
also, did anything weird happen to the server? like a reboot or process restart or config reload, when it started happening? also, are you running only one instance of caddy? there's also a third variable - maybe two certs could have been being written at once somehow. iirc the lock in filestorage is not perfect. @mholt isn't this possible? (for the imperfect file-based lock to cause a parallel write?) |
I have 2 caddy instances, 2 distinct servers. Caddy storage is a JuiceFS mount. The problem started when the servers rebooted (manually, one after another, to apply kernel updates). This has been done many times before without any issues. |
@mholt yes, at this point I'm convinced the tempfile patch would make this not happen. It looks like two Caddy instances were writing to the same file at the same time. @solracsf here is the relevant PR, if you are curious: caddyserver/certmagic#300. From what I see, JuiceFS rename is atomic, so this PR should work for you. Btw, certmagic storage doesn't use advisory locks yet; I opened an issue about this: caddyserver/certmagic#295. I personally don't have a use case for this, since I don't use the filesystem to share certs across multiple instances, so I didn't end up finishing it. If you have the time, it would be nice to add this as a feature. Matt said he is happy to accept a PR to correctly use filesystem-level locks instead of the fake lock that filesystem storage currently uses; I would be happy to review as well. Basically you just need to check on boot whether the filesystem supports a locking method and use it, otherwise fall back to the existing strategy. That said, for any production setting I would seriously recommend just using Redis with https://github.com/pberkel/caddy-storage-redis, or https://github.com/ss098/certmagic-s3 with the JuiceFS S3 gateway, or writing your own certmagic plugin for S3. |
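As a rough illustration of that boot-time check (hypothetical, Unix-only sketch, not certmagic code), a probe could try to take a non-blocking advisory flock on a throwaway file in the storage directory and fall back to the existing lock-file strategy if that fails:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// supportsFlock reports whether the filesystem backing dir accepts POSIX
// advisory locks, by taking and releasing a non-blocking exclusive flock
// on a throwaway probe file. Illustrative sketch only.
func supportsFlock(dir string) bool {
	f, err := os.OpenFile(filepath.Join(dir, ".flock-probe"), os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return false
	}
	defer os.Remove(f.Name())
	defer f.Close()

	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		return false // e.g. not supported on some network filesystems
	}
	syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return true
}

func main() {
	dir := "/var/lib/caddy" // assumption; point this at your storage path
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	fmt.Println("advisory locks supported:", supportsFlock(dir))
}
```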
I understand, but we prefer not to add plugins to Caddy. In our env, JuiceFS has handled terabytes of data without any issue for 2 years now; we prefer to keep it as our filesystem abstraction layer and rely on certmagic's filesystem storage. We'll keep an eye on caddyserver/certmagic#300 and provide feedback. 👍 |
@solracsf You can try it now, if you build Caddy with |
It will be hard to say soon whether it fixes anything, because Caddy was installed and running for 2 years with this setup without any issues 🤔 |
If you are running multiple instances of Caddy with shared storage, you may encounter a locking issue introduced in certmagic v0.21. |
Yeah, sorry; I'm behind on releases. Trying to catch up on things! |
hey @mholt, I'm facing the exact same issue. I recently saw some JSON files containing an additional `}`.
Do we have any fix for this issue? I'm already using Caddy v2.8.4 and running multiple instances of Caddy with a shared NFS volume. Would I be able to solve this issue by moving to Google Cloud Storage/S3? |
The workaround is to wipe out Caddy's storage and restart Caddy. It'll reissue all your certificates, but that should be fine assuming you have only a handful. Or, you can dive into Caddy's storage, look at all the `.json` files, and fix or delete the malformed ones. In v2.9.0 it should be fixed by caddyserver/certmagic#300 |
Thank you @francislavoie! I'll upgrade to v2.9.0 and check all the JSON files, as I have > 1k certs. |
@francislavoie I see that v2.9.0 is still in beta; could I build v2.8.4 with certmagic@master instead? I'm already building the Caddy binary using xcaddy |
Upgrading won't fix the already-broken storage, it'll just prevent further breakages of the storage. You could use the beta; it's pretty stable, just missing a few things we wanted to fit into the final release. |
I faced this issue this morning too, after upgrading Caddy to v2.8.4 and a reboot of the OS. I solved it by removing the site's entire certificate cache dir (such as |
Ran into this today also. Deleted the cert from storage and restarted Caddy, and it issued a new cert successfully. Waiting patiently for 2.9.0! |
@bryanus you can use 2.9.0-beta.3 right now |
Just found another cert in my system that had this issue. Could someone remind me how to upgrade to 2.9.0-beta.3? And do I need to wipe all my certs first to have them reissued? TIA. |
you do not need to wipe all of your certs. I assume that on error, the certs will be re-fetched? maybe @mholt can confirm though. it may be good to go through the certs and delete any of them that are malformed JSON.
depends on how you are running, but you just need to switch to running the new binary / install the new package from https://github.com/caddyserver/caddy/releases/tag/v2.9.0-beta.3 and restart. |
I wrote this script to find all malformed JSON files
|
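As an illustration only (not @aa-shalin's script), a Go sketch with the same purpose could walk the storage tree and flag any `.json` file that fails to parse; the default storage path below is an assumption, so pass your own as an argument:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Default path is an assumption (typical Linux install); pass your
	// storage root as the first argument to override it.
	root := "/var/lib/caddy/.local/share/caddy"
	if len(os.Args) > 1 {
		root = os.Args[1]
	}
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".json") {
			return nil
		}
		data, readErr := os.ReadFile(path)
		if readErr != nil {
			fmt.Fprintln(os.Stderr, "read error:", path, readErr)
			return nil
		}
		if !json.Valid(data) {
			fmt.Println("malformed:", path)
		}
		return nil
	})
}
```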
Caddy will try again later if there's an error.
That will help speed things up, I'm pretty sure. Thanks for helping, @elee1766! And thanks for posting your script, @aa-shalin. |
Same issue here. I'm running a Caddy instance on a single machine with certs stored on an ext4 partition. Here is a one-line version of @aa-shalin's script that fixes this issue as well:
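As an illustration only (not the one-liner referenced above), a Go sketch with the same effect could remove the per-site directory of every cert whose metadata fails to parse, so Caddy re-obtains those certificates on the next load; the storage path is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	root := "/var/lib/caddy/.local/share/caddy/certificates" // assumption; adjust to your storage
	if len(os.Args) > 1 {
		root = os.Args[1]
	}
	var doomed []string
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".json") {
			return nil
		}
		if data, readErr := os.ReadFile(path); readErr == nil && !json.Valid(data) {
			// Queue the whole per-site directory so cert, key, and metadata
			// are removed together and Caddy re-obtains the certificate.
			doomed = append(doomed, filepath.Dir(path))
		}
		return nil
	})
	for _, dir := range doomed {
		fmt.Println("removing:", dir)
		if err := os.RemoveAll(dir); err != nil {
			fmt.Fprintln(os.Stderr, "failed:", err)
		}
	}
}
```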
@mholt Do you have any estimates for the release of version 2.9.0? |
Shortly after new years. One more beta before then I think. |
This issue likely killed a production server of one of our app's main services. We have about 150 domains/certs, most of them not used very often. We're running a single instance of Caddy 2.8.4 on an AWS EC2 instance with an 8GB AWS gp3 volume. For logs, these two lines start and continue repeating:
Then the first log of the brace error occurs and repeats every few seconds:
Overall, we've been using Caddy for two years with no problems. This particular instance has been running since July 2024, when we upgraded to 2.8.4, with no issues, reboots, or restarts. Kind of a wild bug to still be floating around in a current release. Fixing the bad certs fixed the issue. Also anxiously awaiting a new version. But hopefully the fix doesn't assume multiple Caddy instances as the problem. |
The fix doesn't assume multiple Caddy instances as the problem; the current beta (and/or the next version, if you don't wish to use the beta) should resolve the issue for both replicated and single-instance deploys. We have other users who have reported the same bug on single-instance deploys as well. |
@evolross Those log messages are not related AFAICT (but it's hard to tell since they seem truncated). The error reported here is caused, as far as we know, by an incomplete write, often due to power loss, process termination, running out of disk space, and other external factors. I have even seen failed writes where the OS did not return an error, so there's no way Caddy could know. @elee1766's patch is helpful because it writes the contents to a separate file, then renames it only once the entire contents have been written, rather than opening the file in truncation mode (effectively deleting the contents) and hoping the write succeeds. I'll be releasing one more beta (hopefully just one) before the new year, and then 2.9 after the new year. |
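For reference, a minimal sketch of that write-to-temp-then-rename pattern (illustrative only, not CertMagic's actual implementation from caddyserver/certmagic#300):

```go
package main

import (
	"os"
	"path/filepath"
)

// writeFileAtomic writes data to a temporary file in the destination's
// directory, syncs it, and only then renames it over the destination.
// A crash mid-write leaves the old file intact instead of a truncated one.
func writeFileAtomic(filename string, data []byte, perm os.FileMode) error {
	tmp, err := os.CreateTemp(filepath.Dir(filename), ".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename has succeeded

	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // flush to disk before the rename
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	if err := os.Chmod(tmp.Name(), perm); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filename) // atomic on POSIX filesystems
}

func main() {
	_ = writeFileAtomic("/tmp/example.json", []byte(`{"ok":true}`), 0o600)
}
```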
Can confirm 2.9.0 fixed it 👍 |
Caddy v2.8.4 started printing an error for one domain:
Removing the extra `}` at the bottom of the file `/caddy/certificates/acme-v02.api.letsencrypt.org-directory/server.example.com/server.example.com.json` fixes the error.