Conversation
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
tdoublep pushed a commit that referenced this pull request on Jan 20, 2025
This PR cleans up and simplifies the code.

### Changes:
- Simplified warmup by using a function call to remove duplicated lines.
- Moved mask and position_ids from `SENDNNCasualLM` to `SENDNNModelRunner`.
- Fixed an error in pyproject.toml.
- Already merged PR #52 and main into this branch for an easier merge.

The code has been tested in client/server mode with `llama 194m` and `granite 3b` on `AIU` and `CPU`.
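The warmup change described above boils down to factoring the repeated warmup code into a single helper that is called once per shape combination. The sketch below only illustrates that pattern; the helper name and the model-runner methods (`make_dummy_batch`, `run`) are hypothetical and not the actual SENDNN API.

```python
# Illustrative only: replace duplicated warmup code with one helper called
# per (prompt_len, num_decode_tokens, batch_size) combination.
# The model-runner methods used here are hypothetical placeholders.

def _warmup_shape(model_runner, prompt_len, num_decode_tokens, batch_size):
    """Run one dummy prefill + decode pass for a single shape combination."""
    batch = model_runner.make_dummy_batch(prompt_len, batch_size)  # hypothetical
    model_runner.run(batch)                  # prefill pass
    for _ in range(num_decode_tokens):
        model_runner.run(batch)              # decode passes


def warmup(model_runner, shape_combinations):
    # Previously the loop body was copy-pasted once per combination.
    for prompt_len, num_decode_tokens, batch_size in shape_combinations:
        _warmup_shape(model_runner, prompt_len, num_decode_tokens, batch_size)
```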
This PR enables the Spyre tests to run as a GitHub action.
I realized that the model we were using for the tests, `llama-194m`, is not available on the HF hub, but if we want to run the tests externally we need to use a model that is available. I've replaced it with this one: https://huggingface.co/JackFram/llama-160m

Note that I haven't actually changed the model name in the tests; I just "hacked" it for now using a soft link in the docker container (see the sketch below). This is because there is some ongoing work to introduce environment variables to control the tests and I don't want to complicate things.
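For illustration, the soft-link workaround amounts to making the old model path resolve to the newly downloaded `JackFram/llama-160m` checkpoint inside the container. The paths below are hypothetical, not the ones actually used in the image.

```python
# Hypothetical sketch of the temporary workaround: point the path the tests
# still reference at the model that is actually available on the HF hub.
import os

actual_model = "/models/llama-160m"     # downloaded JackFram/llama-160m (illustrative path)
expected_model = "/models/llama-194m"   # path the tests still reference (illustrative path)

if not os.path.exists(expected_model):
    os.symlink(actual_model, expected_model)  # same effect as `ln -s` in the container
```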
For this model I see some quite weird behaviour where the tokens produced by vLLM and HF Transformers are identical but the decoded text is slightly different (they are the same up to a leading space). I don't think this difference is related to Spyre, so I've just changed the test to compare token ids instead.
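A minimal sketch of that test change, assuming the outputs from both backends are available as lists of per-request results; the result structure and field names here are placeholders, not the actual test helpers.

```python
# Illustrative only: assert on generated token ids rather than decoded strings,
# since the ids matched while the decoded text differed by a leading space.

def check_outputs(vllm_results, hf_results):
    for vllm_output, hf_output in zip(vllm_results, hf_results):
        # Compare token ids directly ...
        assert vllm_output["token_ids"] == hf_output["token_ids"]
        # ... instead of the decoded text, which can differ by a leading space:
        # assert vllm_output["text"] == hf_output["text"]
```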