fix: add missing return in load_model_with_fallback #230
sjswerdloff wants to merge 1 commit into waybarrios:main
Conversation
When mlx_lm.load() succeeds without raising ValueError, the function
falls through without returning the (model, tokenizer) tuple, causing
a TypeError in the caller ('cannot unpack non-iterable NoneType').
This affects any model that loads successfully on the first try
(the common case). The ValueError catch paths all have returns,
but the success path was missing one.
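The failure mode is easy to reproduce in isolation. A minimal sketch (the `load` stub below stands in for `mlx_lm.load()`; the simplified function bodies are an assumption for illustration, not the actual file):

```python
def load(model_name, tokenizer_config=None):
    """Stub standing in for mlx_lm.load(); always succeeds here."""
    return ("model", "tokenizer")

def load_model_with_fallback_buggy(model_name, tokenizer_config=None):
    try:
        load(model_name, tokenizer_config=tokenizer_config)  # result discarded
    except ValueError:
        raise
    # Falls through on success: implicitly returns None.

def load_model_with_fallback_fixed(model_name, tokenizer_config=None):
    try:
        # The one-line fix: return the (model, tokenizer) tuple on success.
        return load(model_name, tokenizer_config=tokenizer_config)
    except ValueError:
        raise
```

With the buggy version, `model, tokenizer = load_model_with_fallback_buggy(...)` raises `TypeError: cannot unpack non-iterable NoneType object`, which matches the traceback described above.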
This is a duplicate of #237. The same issue applies here; with the fix, the success path returns like the fallback paths already do:

```python
return load(model_name, tokenizer_config=tokenizer_config)  # line 54: success
return _load_with_tokenizer_fallback(model_name)            # line 59: TokenizersBackend fallback
return _load_strict_false(model_name, tokenizer_config)     # line 68: extra weights fallback
raise                                                       # line 69: unhandled ValueError
```
Worth noting that #235 and #237 are independent submissions of the same one-line fix. Three people hit this bug and submitted PRs within three days of each other because nothing was merged. If review bandwidth is the bottleneck, I am happy to help triage incoming PRs or act as a community reviewer. I have production experience with the codebase (122B on M2 Ultra, continuous batching, MTP) and already review most new submissions. Let me know if that would be useful.
Sounds good @Thump604
Thump604
left a comment
One-line fix, correct. Merging.