source instead of exec in run-readme-pr-macos.yml #1476
base: main
Conversation
source test commands instead of executing them. (Possible fix for pytorch#1315.)
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1476
Note: Links to docs will display an error until the docs builds have been completed.
❌ 24 New Failures as of commit 0e21e95 with merge base 4356b4c. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
source instead of exec
Somebody pushed all the model exports into exportedModels, but we never create the directory. We should do that in the workflow, and also in the user instructions, because storing into a directory that doesn't exist is not good :)
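A minimal sketch of the fix (the export invocation follows the pattern in the torchchat README; exact flags may differ per doc): create the directory up front so neither the workflow nor a user following the instructions writes into a missing path.

```bash
# Create the output directory before exporting; `mkdir -p` is
# idempotent, so re-running the instructions stays safe.
mkdir -p exportedModels
python3 torchchat.py export stories15M --output-pte-path exportedModels/stories15M.pte
```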
@Jack-Khuu when it rains it pours: that showed more false positives per #1315 than anybody could anticipate! I added the directory that all the examples use (but don't create!); alternatively, we can just remove the directory from the examples.
@Jack-Khuu Ideally we start a run for 1476, and in parallel commit 1409, 1410, 1417, 1439, 1455, 1466. PS: In a nutshell, failures in the doc-based runs haven't bubbled up because a failure inside a shell script executed with bash did not pass failure information upstream. Using source to run the multiple layers rectifies this, and may be a pragmatic answer to restoring full test coverage. (I think right now we've caught some of the failures that did not bubble up to hud.pytorch.org because of the exec/bash dichotomy by eyeballing, which is not a healthy long-term solution.)
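A minimal sketch of the mechanism described above (hypothetical `cmds.sh` holding the commands extracted from a doc):

```bash
set -e  # the job shell aborts on the first failing command

# Without `set -e` inside cmds.sh, `bash cmds.sh` returns only the
# exit status of the *last* command, so an earlier failure is masked:
bash ./cmds.sh    # may exit 0 even if a middle command failed

# Sourcing runs the same commands in the current shell, where the
# job-level `set -e` applies to each of them, so the first failure
# aborts the job and surfaces on hud.pytorch.org:
source ./cmds.sh
```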
Yup, making my way through those CI PRs, then we'll rebase this one. Our current coverage has plenty of gaps and has honestly been ad hoc, so revamping the CI and creating a comprehensive unit-test system is a P0 KR for us this half (working on it with @Gasoonjia). Thanks again for grinding through these!!
pip3 not found. I guess we use conda for this environment. That's interesting: how do we deal with conda like that, or is it just pip vs pip3?
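A hedged sketch of one way to cope (the requirements file name is illustrative): fall back to `python3 -m pip`, which resolves the pip belonging to the active interpreter even when conda doesn't put `pip3` on PATH.

```bash
# Use pip3 when it exists; otherwise go through the interpreter,
# which works in conda environments that only expose `pip`.
if command -v pip3 >/dev/null 2>&1; then
  pip3 install -r requirements.txt
else
  python3 -m pip install -r requirements.txt
fi
```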
The multimodal doc needed an end-of-tests comment.
Need to download files before using them, lol. We expect users to do this, but we should spell it out. Plus, if we extract the commands for testing, they obviously fail without the download step.
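For example (command form per the torchchat README; the model alias is illustrative), the docs could spell out the download before any step that reads the checkpoint, so extracted test runs don't fail on a missing file:

```bash
# Fetch the model first; later generate/export steps assume the
# checkpoint already exists locally.
python3 torchchat.py download stories15M
python3 torchchat.py generate stories15M --prompt "Hello"
```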
`(` triggers an "unexpected token" error in macOS zsh.
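An assumed minimal reproduction (macOS defaults to zsh): an unquoted `(` in a replayed doc command is parsed as shell syntax, so the doc text needs quoting before it can run.

```bash
# An unquoted parenthesis is parsed as syntax, not text:
#   % echo Possible fix (see comments)
#   zsh: parse error near `('
# Quoting keeps the parenthesis literal when the text is replayed:
echo 'Possible fix (see comments)'
```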
# metadata does not install properly on macos # .ci/scripts/run-docs multimodal
https://hud.pytorch.org/pr/pytorch/torchchat/1476#36153855033
install wget
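A hedged sketch of that install step (assuming Homebrew is available on the macOS runner, which the commit above relies on):

```bash
# The macOS image apparently lacks wget (curl is present);
# install it only when missing so reruns stay fast.
command -v wget >/dev/null 2>&1 || brew install wget
```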
echo ".ci/scripts/run-docs native DISABLED" # .ci/scripts/run-docs native
echo ".ci/scripts/run-docs native DISABLED" # .ci/scripts/run-docs native
pip3 command not found. This is called from
Looks like #1362 for the mismatched group size is finally marked as failing properly. cc: @Gasoonjia
Some issues: we can't find pip3 and/or conda. https://github.com/pytorch/torchchat/actions/runs/12996559809/job/36252363658
https://ossci-raw-job-status.s3.amazonaws.com/log/pytorch/torchchat/36252362180
https://ossci-raw-job-status.s3.amazonaws.com/log/pytorch/torchchat/36252360312
The following is an issue specific to the use of the stories models, because their feature sizes aren't a multiple of the 256 group size. Originally, I had included padding or other support for handling this (embedding quantization just handles a partial group, for example). Since moving to torchao, we insist that the feature size be a multiple of the group size.
Options: (1) switch the tests to a model whose feature sizes are a multiple of 256; (2) switch the tests to a smaller group size such as gs=32; (3) bring back handling for partial groups. I'll assume that (3) might take a while for discussion and implementation, so going with (1) or (2) is probably the pragmatic solution. (With the caveat that (2) won't test gs=256; but it may be the quickest to implement, and I'm not sure what the smallest model for (1) is. I haven't looked at Stories 110M, which may be an acceptable stand-in re: feature sizes being a multiple of 256, although it will drive up the runtime of our tests....)
Resolved via (2) for now.
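Option (2) in practice (a hedged sketch; the quantization JSON follows the torchchat docs, and the scheme name is illustrative):

```bash
# stories15M's feature sizes are not multiples of 256, so gs=256
# trips torchao's divisibility check; gs=32 divides them evenly.
python3 torchchat.py generate stories15M \
  --quantize '{"linear:int4": {"groupsize": 32}}' \
  --prompt "Once upon a time"
```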
switch to gs=32 quantization (requires consolidated run-docs of pytorch#1439)
add gs=32 cuda quantization for use w/ stories15M
add gs=32 for stories15M
test-advanced-any
Comment out tests that currently fail, as per summary in PR comments
source test commands instead of executing them.
(Possible fix for #1315)