Action may not be able to determine job_id/html_url if run has more than 30 jobs #1
Comments
Sorry for the late response. Thanks for your report. I'll look into it.
First of all, thanks for pointing this out @IngoStrauch2020. I have also come to the conclusion that a quick fix for this problem is using the `per_page` URL parameter. Also, after digging further, I found there are two ways to get the current job_id.
I haven't tested it yet, but it seems there are some ways to get the current job name using JavaScript actions: actions/toolkit#550
I would like to support up to 100 jobs first, and then rewrite this action using Octokit, which supports pagination (see https://octokit.github.io/rest.js/v18#pagination).
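For reference, a minimal sketch of what such an Octokit-based lookup could look like (this is an illustration, not the action's actual code; the `findJob` helper and its parameters are hypothetical):

```typescript
import { Octokit } from "@octokit/rest";

async function findJob(owner: string, repo: string, runId: number, jobName: string) {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  // paginate() follows the Link headers of the response, so runs with more
  // than 100 jobs are handled transparently and the result is one flat array.
  const jobs = await octokit.paginate(octokit.rest.actions.listJobsForWorkflowRun, {
    owner,
    repo,
    run_id: runId,
    per_page: 100, // hard per-page limit of the API
  });
  return jobs.find((job) => job.name === jobName) ?? null;
}
```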
I have added the following.
I'm not having any trouble with Docker at the moment, so I'm not in a hurry to rewrite this action using Octokit, but I'll make sure I do.
I'm closing this issue because it is now possible to get the job_id even if there are more than 30 jobs.
See Tiryoh/gha-jobid-action#1: the `Tiryoh/gha-jobid-action` action currently uses the GitHub REST API to fetch all jobs for the given workflow run, but it only processes the first page, and the default `per_page` is 30.

For the record, the most recent workflow run https://github.com/kaitai-io/ci_targets/actions/runs/5696726848 currently has 39 jobs, so there were 9 jobs which weren't among the first 30 jobs returned by the GitHub REST API, which caused both output variables `job_id` and `html_url` to be `"null"`. The 9 jobs were the following:

* perl/5.38-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/8fc6a313ec238507536e4185fcbeb995bc94f389/test_out/perl/ci.json#L9-L10
* php/7.1-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/9bba3f500ebe1e3911d6056fcafde17c6c2ca1a2/test_out/php/ci.json#L9-L10
* php/8.2-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/ed0e41f1958b4ac2e8bbfbca87cf655e7fb84bbd/test_out/php/ci.json#L9-L10
* python/2.7-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/1e8512d15e2a1f8f370e50bebd9909645aee0cbc/test_out/python/ci.json#L9-L10
* python/3.4-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/d98d51959c4c7edd805f7964d6bb84aadeed3cc2/test_out/python/ci.json#L9-L10
* python/3.11-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/2d177a43e526d2caf26a8808c1796e6b87c12a3e/test_out/python/ci.json#L9-L10
* ruby/1.9-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/47791f7a6830511c1958b35294f2b26314e6ccb0/test_out/ruby/ci.json#L1163-L1164
* ruby/2.3-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/bb2faa914d31ac8271125c1e10a4cca8cd527d7c/test_out/ruby/ci.json#L9-L10
* ruby/3.2-linux-x86_64: https://github.com/kaitai-io/ci_artifacts/blob/0d46fe1d3be4067d871090a7f1056ddab9eb73d1/test_out/ruby/ci.json#L9-L10
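One way to observe the truncation on such a run is to compare the response's `total_count` field with the number of job entries actually returned. A rough sketch (assuming Node 18+ with global `fetch`; an unauthenticated request, subject to rate limits):

```typescript
// Without per_page/page parameters the endpoint returns at most 30 jobs,
// but total_count still reports the real number of jobs in the run.
const res = await fetch(
  "https://api.github.com/repos/kaitai-io/ci_targets/actions/runs/5696726848/jobs",
  { headers: { Accept: "application/vnd.github+json" } }
);
const body = await res.json();
console.log(body.total_count); // 39 for the run described above
console.log(body.jobs.length); // 30, i.e. the remaining 9 jobs are on page 2
```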
Summary
The jobs API endpoint returns only 30 jobs by default, i.e. this action currently cannot determine the job_id/html_url for every job if a run has more than 30 jobs.
Details
The API (see https://docs.github.com/en/rest/reference/actions#list-jobs-for-a-workflow-run) has a URL parameter `per_page` so that one can specify how many entries should be returned. There is a hard limit of 100.
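For illustration, a minimal sketch of a request that raises the page size to that hard limit (`OWNER`, `REPO`, and `RUN_ID` are placeholders; assuming Node 18+ with global `fetch`):

```typescript
// Request up to the hard limit of 100 jobs in a single call by setting
// the per_page URL parameter.
const url =
  "https://api.github.com/repos/OWNER/REPO/actions/runs/RUN_ID/jobs?per_page=100";
const res = await fetch(url, { headers: { Accept: "application/vnd.github+json" } });
const { total_count, jobs } = await res.json();
// If total_count > jobs.length, the run has more than 100 jobs and
// further pages would still be needed.
```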
Suggestions

* set `per_page` to 100 (the hard limit)
  * this will at least help people with runs that contain 31 to 100 jobs
* use the `page` URL parameter of the API depending on the first API response (see the sketch after this list)
  * so in this example, if `per_page` equals the default value of 30 and the desired job name is not found in this response, perform a second request with `?page=2` and check the second response
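A rough sketch of that page-by-page lookup (the `findJobByName` helper and its parameters are hypothetical; assuming Node 18+ with global `fetch`):

```typescript
async function findJobByName(owner: string, repo: string, runId: number, jobName: string) {
  const perPage = 100;
  for (let page = 1; ; page++) {
    const res = await fetch(
      `https://api.github.com/repos/${owner}/${repo}/actions/runs/${runId}/jobs?per_page=${perPage}&page=${page}`,
      { headers: { Accept: "application/vnd.github+json" } }
    );
    const { total_count, jobs } = await res.json();
    // A matching job carries the values the action outputs:
    // .id (job_id) and .html_url.
    const match = jobs.find((job: { name: string }) => job.name === jobName);
    if (match) return match;
    // Stop once all jobs have been seen without a match.
    if (page * perPage >= total_count || jobs.length === 0) return null;
  }
}
```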