log previous assisted-chat-eval-test container's logs #140
```diff
@@ -9,10 +9,10 @@ OCM_TOKEN=$(curl -X POST https://sso.redhat.com/auth/realms/redhat-external/prot
   -H "Content-Type: application/x-www-form-urlencoded" \
   -d "grant_type=client_credentials" \
   -d "client_id=$CLIENT_ID" \
-  -d "client_secret=$CLIENT_SECRET" | jq '.access_token')
+  -d "client_secret=$CLIENT_SECRET" | jq -r '.access_token')
```
🛠️ Refactor suggestion

**Good fix: `jq -r` for raw token; add a guard and curl hardening to avoid "null" tokens sneaking through.**

`jq -r` is correct. However, if the token request returns 4xx/5xx (or a JSON error without `access_token`), the pipeline can still succeed and you'll proceed with `OCM_TOKEN="null"` or empty, causing confusing auth failures later. Recommend: make curl fail on HTTP errors and validate the token right after assignment.

Add immediately after the assignment:

```shell
# Fail early if token is empty or 'null'
if [ -z "${OCM_TOKEN:-}" ] || [ "$OCM_TOKEN" = "null" ]; then
  echo "Failed to obtain OCM access token" >&2
  exit 1
fi
```

Optionally harden the curl call:

```shell
# Add --fail-with-body, retries, and timeout for robustness
OCM_TOKEN=$(curl -sS --fail-with-body --retry 3 --retry-delay 2 --max-time 15 \
  -X POST https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" | jq -r '.access_token')
```
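The empty/`null` check behaves as described because `jq -r` prints the literal string `null` when `.access_token` is absent. A minimal sketch of that guard, wrapped in a hypothetical `check_ocm_token` helper (not part of the PR) so it can be exercised without any network call:

```shell
# Hypothetical helper mirroring the proposed guard: both an empty token and
# the literal string "null" (what `jq -r` yields for a missing key) must fail.
check_ocm_token() {
  token="$1"
  if [ -z "${token:-}" ] || [ "$token" = "null" ]; then
    echo "Failed to obtain OCM access token" >&2
    return 1
  fi
  echo "token ok"
}
```

In the real script the helper would be called right after the `OCM_TOKEN` assignment, with `exit 1` instead of `return 1`.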
```diff
 echo "$OCM_TOKEN" > test/evals/ocm_token.txt

 cd test/evals

-#python eval.py --agent_endpoint "${AGENT_URL}:${AGENT_PORT}"
+python eval.py --agent_endpoint "${AGENT_URL}:${AGENT_PORT}"
```

coderabbitai[bot] marked this conversation as resolved.
💡 Verification agent

🧩 Analysis chain

**Re-enable eval: prefer `python3 -u` and ensure the token is raw (`jq -r`) to avoid auth breakage.**

Apply this diff for the eval invocation:

```diff
-python eval.py --agent_endpoint "${AGENT_URL}:${AGENT_PORT}"
+python3 -u eval.py --agent_endpoint "${AGENT_URL}:${AGENT_PORT}"
```

Additionally, update the token extraction earlier in the script to write the raw access token (no quotes):

```shell
# Replace the existing assignment with the raw-output variant
OCM_TOKEN=$(curl -sS -X POST https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" | jq -r '.access_token')
```

Length of output: 258

🏁 Script executed:

```bash
#!/bin/bash
set -e
echo "🔍 Checking OCM_TOKEN assignment in test/prow/entrypoint.sh..."
rg -n --context 3 "OCM_TOKEN" test/prow/entrypoint.sh || true
echo "🔍 Checking python invocation in test/prow/entrypoint.sh..."
rg -n --context 3 "python " test/prow/entrypoint.sh || true
echo "🔍 Checking for 'set -o nounset' in test/prow/entrypoint.sh..."
rg -n --context 3 "set -o nounset" test/prow/entrypoint.sh || true
```

Length of output: 1149

🏁 Script executed:

```bash
#!/bin/bash
set -e
echo "🔍 Locating test/evals/eval.py..."
if [ -f test/evals/eval.py ]; then
  echo "Found test/evals/eval.py, showing first 5 lines:"
  head -n 5 test/evals/eval.py
else
  echo "eval.py not found at test/evals/eval.py. Searching for any eval.py:"
  find test -type f -name eval.py
fi
```

Length of output: 327

**Re-enable eval with unbuffered Python 3 and raw token parsing.**

Suggested fixes:

- Invoke the eval with `python3 -u eval.py --agent_endpoint "${AGENT_URL}:${AGENT_PORT}"` so output is unbuffered.
- Extract the token with `jq -r '.access_token'` so `ocm_token.txt` contains the raw token rather than a quoted JSON string.

These changes are critical to avoid CI log buffering issues and authentication failures when re-enabling `eval.py`.
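To make the quoting issue concrete, the following illustration (assumed sample values, no network calls) shows what each `jq` mode leaves in the shell variable, and why the quoted form would corrupt an `Authorization` header:

```shell
# Without -r, jq emits the JSON string including its double quotes, so a header
# built from it would literally read: Authorization: Bearer "abc123"
quoted_token='"abc123"'   # what  jq '.access_token'     yields for {"access_token":"abc123"}
raw_token='abc123'        # what  jq -r '.access_token'  yields for the same input
echo "bad:  Authorization: Bearer $quoted_token"
echo "good: Authorization: Bearer $raw_token"
```

The SSO server sees the quotes as part of the token and rejects it, which is why the raw (`-r`) form must be what lands in `ocm_token.txt`.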
**Empty Vertex AI project config will likely break inference provider initialization.**

The `remote::vertexai` provider typically requires a non-empty GCP project. Leaving `project: ""` will cause provider init or the first call to fail. Make `project` a required parameter and wire it here. Consider parameterizing `location` as well.

Apply this diff within this block:

```diff
 inference:
   providers:
-    - provider_id: vertex_ai
-      provider_type: remote::vertexai
-      config:
-        project: ""
-        location: us-central1
+    - provider_id: vertex_ai
+      provider_type: remote::vertexai
+      config:
+        project: ${VERTEX_PROJECT_ID}
+        location: ${VERTEX_LOCATION}
```

And add template parameters (outside this hunk). If you prefer to keep `location` hard-coded, only add `VERTEX_PROJECT_ID` and leave `location: us-central1`.
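One way to enforce the required parameter is a pre-flight check before the config is rendered. This is a sketch assuming the `VERTEX_PROJECT_ID`/`VERTEX_LOCATION` environment variables suggested above; the `require_vertex_project` helper is hypothetical, not existing code:

```shell
# Hypothetical pre-flight check: refuse to render the provider config when the
# Vertex AI project is unset; default the location the way the template does.
require_vertex_project() {
  project="${VERTEX_PROJECT_ID:-}"
  location="${VERTEX_LOCATION:-us-central1}"
  if [ -z "$project" ]; then
    echo "VERTEX_PROJECT_ID is required" >&2
    return 1
  fi
  echo "project=$project location=$location"
}
```

Failing fast here gives a clear one-line error instead of an opaque provider-initialization failure on the first inference call.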