llama3.2 on iPhone 16 generates repeated, bad responses #7156
Comments
@fighting300 thanks for letting us know about the problem. What steps can we follow to build the same version of the app that you're using?
Also, do you get the same results for command line generation?
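(For anyone following along: one way to compare against the on-device output is to drive the exported model from the desktop. The sketch below is hypothetical and not from this issue; the runner binary path, model/tokenizer file names, and flag names are assumptions based on the ExecuTorch llama example build and may need adjusting for your checkout.)

```python
# Hypothetical sketch: invoke a locally built ExecuTorch llama runner from Python
# so its output can be compared with what the iOS demo app produces.
# Binary path, file names, and flags are assumptions -- adjust to your build.
import subprocess

cmd = [
    "cmake-out/examples/models/llama/llama_main",  # assumed build output location
    "--model_path=llama3_2_1b_instruct.pte",       # exported model (hypothetical name)
    "--tokenizer_path=tokenizer.model",            # tokenizer shipped with the weights
    "--prompt=What is the capital of France?",
    "--seq_len=128",
]

# Run the native generator and print whatever it writes to stdout.
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
print(result.stdout)
```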
Hi, we've had a similar issue: it is likely that you are using the non-instruct version of the LLaMA weights :D at least that was the fix in our case
Thanks for the tip @chmjkb! @fighting300, what version of the weights are you using here?
I followed the instructions in the README in the llama directory.
I'm using the Llama 3.2-1B lightweight model.
@fighting300 there are two versions of the 3.2-1B lightweight model; could you try the one with "instruct" in the name, as @chmjkb mentioned?
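As a hedged illustration of the suggested fix: the sketch below fetches the instruct-tuned checkpoint from Hugging Face instead of the base one before export. Using `huggingface_hub` is an assumption about how the weights were obtained and is not part of the original report; the repo IDs are the public meta-llama listings and require accepting the model license.

```python
# Minimal sketch: download the *instruct* variant of Llama 3.2-1B instead of the
# base model before exporting it for ExecuTorch. Assumes the weights come from
# Hugging Face and that the meta-llama license has been accepted for the repo.
from huggingface_hub import snapshot_download

# Base model -- tends to continue/repeat text rather than follow chat prompts.
# BASE_REPO = "meta-llama/Llama-3.2-1B"

# Instruct-tuned model -- the variant a chat-style demo app expects.
INSTRUCT_REPO = "meta-llama/Llama-3.2-1B-Instruct"

local_dir = snapshot_download(
    repo_id=INSTRUCT_REPO,
    allow_patterns=["*.json", "*.model", "*.pth", "*.safetensors"],  # skip extras
)
print(f"Instruct weights downloaded to: {local_dir}")
```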
🐛 Describe the bug
Running llama3.2 on an iPhone 16 generates repeated, bad responses, making conversation impossible.
Versions
iPhone: 16
OS: iOS 18.1
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (x86_64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.10.15 (main, Sep 7 2024, 00:20:06) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-15.0.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] executorch==0.4.0a0+6a085ff
[pip3] executorchcoreml==0.0.1
[pip3] numpy==1.21.3
[pip3] torch==2.2.2
[pip3] torchao==0.7.0+git75d06933
[pip3] torchaudio==2.2.2
[pip3] torchsr==1.0.4
[pip3] torchvision==0.17.2
[conda] executorch 0.4.0a0+6a085ff pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 2.1.3 pypi_0 pypi
[conda] numpydoc 1.7.0 py312hecd8cb5_0 defaults
[conda] torch 2.2.2 pypi_0 pypi
[conda] torchaudio 2.2.2 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi
cc @mergennachin @cccclai @helunwencser @dvorjackz