Linux Apt Key for nvidia.github.io/libnvidia-container is Failing Due to GitHub CLI Apt Key Update #688

Closed
iPrOmar opened this issue Sep 9, 2024 · 6 comments

Comments

iPrOmar commented Sep 9, 2024

When running $ sudo apt-get update -y in Ubuntu 20.04.6 LTS, the following errors are seen -

Get:2 https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/amd64  InRelease [1484 B]
...
yadda yadda yadda
...
Get:6 https://cli.github.com/packages focal InRelease [3915 B]
...
yadda yadda yadda
...
Err:6 https://cli.github.com/packages focal InRelease
  The following signatures were invalid: EXPKEYSIG 23F3D4EA75716059 GitHub CLI <[email protected]>
Fetched 1484 B in 2s (838 B/s)
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://cli.github.com/packages focal InRelease: The following signatures were invalid: EXPKEYSIG 23F3D4EA75716059 GitHub CLI <[email protected]>
W: Failed to fetch https://cli.github.com/packages/dists/focal/InRelease  The following signatures were invalid: EXPKEYSIG 23F3D4EA75716059 GitHub CLI <[email protected]>
W: Some index files failed to download. They have been ignored, or old ones used instead.
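
For reference, which source list actually defines that failing repository (and which keyring it is signed with, if any) can be seen with a quick grep over the standard apt locations; the paths below are just the Ubuntu defaults -

$ grep -rn "cli.github.com" /etc/apt/sources.list /etc/apt/sources.list.d/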

Upon Googling around the GitHub CLI error, it appears that GitHub have addressed their side of the issue...

This did not fix the Apt errors seen above.

The rest of the original description below turned out to be wrong... the issue was due to an Apt repository buried in the cloud-init image for our VMs... It had nothing to do with nvidia... Upon re-building the VM with the latest cloud-init image, the problem was resolved...

However upon closer inspection, it can be seen that this is in fact nvidia runtime container calling on GitHub CLI and if I try to update the nvidia github apt key -
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
Nothing is fixed... :-(
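
Worth noting that apt-key is deprecated in newer apt releases; NVIDIA's current container-toolkit instructions use a dedicated keyring instead, roughly along these lines (check the official docs for the exact steps) -

$ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
    sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg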

Upon further investigation doing a verbose curl of the nvidia github GPG Key, it can be seen that in fact the TLS Handshake has expired...

$ curl https://nvidia.github.io/nvidia-docker/gpgkey -v
*   Trying 185.199.111.153:443...
...
yadda yadda yadda
...
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 200
< server: GitHub.com
< content-type: application/octet-stream
< permissions-policy: interest-cohort=()
< x-origin-cache: HIT
< last-modified: Tue, 10 Jan 2023 14:15:05 GMT
< access-control-allow-origin: *
< etag: "63bd72e9-c7b"
< expires: Mon, 09 Sep 2024 09:27:33 GMT
< cache-control: max-age=600
< x-proxy-cache: MISS
< x-github-request-id: 5525:7F960:58BA6BC:5A508A7:66DEBD2C
< accept-ranges: bytes
< date: Mon, 09 Sep 2024 15:22:40 GMT
< via: 1.1 varnish
< age: 34
< x-served-by: cache-ams2100137-AMS
< x-cache: HIT
< x-cache-hits: 1
< x-timer: S1725895361.769972,VS0,VE1
< vary: Accept-Encoding
< x-fastly-request-id: 2fb675e40fde53a48c7cfd105d0312e9f3bea7ee
< content-length: 3195
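
For what it's worth, the expires: line above is an HTTP cache header (paired with cache-control: max-age=600) rather than a certificate or key expiry; whether the TLS side has actually expired can be checked with something like -

$ echo | openssl s_client -connect nvidia.github.io:443 -servername nvidia.github.io 2>/dev/null | openssl x509 -noout -dates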

Do you think this stuff might get addressed soon, as it will start to impact our ability to keep our Linux VM estate up to date?

Thanks x

@williammartin

Hi @iPrOmar, many apologies for the inconvenience from the GitHub CLI team. I'm not sure exactly what you mean by:

> However upon closer inspection, it can be seen that this is in fact nvidia runtime container calling on GitHub CLI

It looks to me like you need to grab the new keyring from us at GitHub as per the apt instructions. Did that not work? Sorry if you already tried this and I'm just making things confusing, I'm just trying to be proactive about problems I see relating to our key expiry!

iPrOmar commented Sep 10, 2024

> Hi @iPrOmar, many apologies for the inconvenience from the GitHub CLI team. I'm not sure exactly what you mean by:
>
> > However upon closer inspection, it can be seen that this is in fact nvidia runtime container calling on GitHub CLI
>
> It looks to me like you need to grab the new keyring from us at GitHub as per the apt instructions. Did that not work? Sorry if you already tried this and I'm just making things confusing, I'm just trying to be proactive about problems I see relating to our key expiry!

Hi @williammartin
Sorry I should've made my original post clearer... I have already tried GitHub's fix... and it has not fixed the Apt errors that I'm seeing...

I'll update the original post so it is clearer...

iPrOmar closed this as not planned on Sep 27, 2024

iPrOmar commented Sep 27, 2024

Sorry but this issue was due to an Apt repository buried in the cloud-init image for our VMs... It had nothing to do with nvidia...

@BarteJK99

> (quotes the original issue description in full)

@BarteJK99

# Check for wget; if not installed, install it
(type -p wget >/dev/null || (sudo apt update && sudo apt-get install wget -y)) \
  && sudo mkdir -p -m 755 /etc/apt/keyrings

# Set keyring path based on the existence of /usr/share/keyrings/githubcli-archive-keyring.gpg:
# if it is in the old location, use that, otherwise always use the new location.
if [ -f /usr/share/keyrings/githubcli-archive-keyring.gpg ]; then
  keyring_path="/usr/share/keyrings/githubcli-archive-keyring.gpg"
else
  keyring_path="/etc/apt/keyrings/githubcli-archive-keyring.gpg"
fi

echo "replacing keyring at ${keyring_path}"

# Download and set up the keyring
wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee "$keyring_path" > /dev/null \
  && sudo chmod go+r "$keyring_path"

# Idempotently add the GitHub CLI repository as an apt source
echo "deb [arch=$(dpkg --print-architecture) signed-by=$keyring_path] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null

# Update the package lists, which should now pass
sudo apt update
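
Once the above has run, the freshly downloaded key's expiry can be checked directly from the keyring file (use whichever path the if block above selected), e.g. -

$ gpg --show-keys /etc/apt/keyrings/githubcli-archive-keyring.gpg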


iPrOmar commented Oct 29, 2024

Hi @williammartin & @BarteJK99

Thanks for your suggestions & help. I've pointed out in an above comment & in a re-edit of the original issue post what the issue turned out to be. If you would like some closure, I've quoted it below too... 🤗

> The issue was due to an Apt repository buried in the cloud-init image for our VMs... It had nothing to do with nvidia... Upon re-building the VM with the latest cloud-init image, the problem was resolved...
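
For anyone who lands here with a similar setup, grepping the cloud-init configuration and seed data for the offending repository is a quick way to confirm where it was baked in (the paths below are the usual cloud-init locations) -

$ grep -rn "cli.github.com" /etc/cloud/ /var/lib/cloud/ 2>/dev/null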
