memory: add is_iswa for memory_hybrid #18640

Closed

ngxson wants to merge 1 commit into ggml-org:master from ngxson:xsn/mem_hybrid_iswa

Conversation

@ngxson (Collaborator) commented Jan 6, 2026

Alternative to #18601: reuse llama_memory_hybrid instead of duplicating the class.

(I haven't tested it yet, just pushing this PR for discussion)
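For context, a minimal sketch of what this PR's approach amounts to: the existing hybrid memory carries a single boolean and exposes it through an `is_iswa()` accessor, so no separate `*_iswa` class is needed. The types and member names below are simplified stand-ins, not the actual llama.cpp declarations.

```cpp
#include <cstdint>
#include <memory>

// Stand-in for the attention KV cache held by the hybrid memory.
struct kv_cache_stub {
    uint32_t n_swa = 0; // sliding-window size; 0 means full attention
};

// Simplified stand-in for llama_memory_hybrid with the added flag.
class memory_hybrid_stub {
public:
    memory_hybrid_stub(std::unique_ptr<kv_cache_stub> attn, bool swa)
        : attn_(std::move(attn)), swa_(swa) {}

    // Graph-building code can branch on this instead of needing a
    // dedicated *_iswa subclass.
    bool is_iswa() const { return swa_; }

private:
    std::unique_ptr<kv_cache_stub> attn_;
    bool swa_;
};

int main() {
    memory_hybrid_stub mem(std::make_unique<kv_cache_stub>(), /*swa=*/true);
    return mem.is_iswa() ? 0 : 1;
}
```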

@ggerganov (Member)

I prefer the #18601 approach. Indeed, it duplicates a significant chunk of the code, but here we still have to add the new llm_graph_input_mem_hybrid_iswa, or alternatively add the extra is_iswa logic there in a similar way. On the other hand, the logic of each class in #18601 is more straightforward and easier to follow.

My recommendation is to proceed with llama_memory_hybrid_iswa for now.
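To illustrate the trade-off being weighed, here is a rough sketch of the #18601 shape under the same simplifications: a dedicated `*_iswa` memory class paired with its own graph-input type, so the graph code never branches on a runtime flag. All names are hypothetical stand-ins for llama_memory_hybrid_iswa and llm_graph_input_mem_hybrid_iswa, not the real interfaces.

```cpp
#include <memory>

// Hypothetical stand-ins for the two graph-input types.
struct graph_input_mem_hybrid      {}; // plain hybrid input
struct graph_input_mem_hybrid_iswa {}; // SWA-aware hybrid input

// Plain hybrid memory: produces the plain input type.
class memory_hybrid_stub {
public:
    std::unique_ptr<graph_input_mem_hybrid> make_input() const {
        return std::make_unique<graph_input_mem_hybrid>();
    }
};

// Dedicated iSWA variant: duplicates some code, but the type alone
// tells the graph builder what it is dealing with.
class memory_hybrid_iswa_stub {
public:
    std::unique_ptr<graph_input_mem_hybrid_iswa> make_input() const {
        return std::make_unique<graph_input_mem_hybrid_iswa>();
    }
};

int main() {
    memory_hybrid_iswa_stub mem;
    auto inp = mem.make_input(); // the type encodes iSWA, no runtime check
    return inp ? 0 : 1;
}
```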

@ngxson (Collaborator, Author) commented Jan 6, 2026

Got it, thanks! Let's go with the other PR then.

@ngxson closed this Jan 6, 2026
@ggerganov (Member)

There might be a simplification where, instead of having both llama_kv_cache and llama_kv_cache_iswa, we keep just llama_kv_cache_iswa, since it is technically a superset of the llama_kv_cache functionality. But we can do that simplification after this.
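A rough sketch of that simplification, under the assumption (not confirmed in the thread) that a non-SWA cache can be modeled as the degenerate case where the sliding window spans the whole context. The struct below is hypothetical and much reduced, not the real llama_kv_cache_iswa interface.

```cpp
#include <cstdint>

// Hypothetical reduced view of an iSWA-only cache: a "plain" cache is
// just the case where the window covers the entire context.
struct kv_cache_iswa_stub {
    uint32_t n_ctx; // context size
    uint32_t n_swa; // window size; n_swa == n_ctx means no sliding window

    bool is_sliding() const { return n_swa < n_ctx; }
};

int main() {
    kv_cache_iswa_stub full = {4096, 4096}; // behaves like llama_kv_cache
    kv_cache_iswa_stub swa  = {4096, 1024}; // genuine sliding-window cache
    return (!full.is_sliding() && swa.is_sliding()) ? 0 : 1;
}
```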
