-
Wow, that's a lot of questions you're raising here. I won't be able to cover all of them in a short response, but there are a few things I can definitely elaborate on.
-
I am sorry for asking so many questions, I will tone that back. I knew I was doing something wrong in my usage - now I know what/how/why. I will go forward using Projects in Sublime Text much more than I have been, with one AiChat tab per Project. I will also "reset history" whenever the topic changes within each Project, rather than just continuing on for thousands of lines across multiple topics. That will help a lot.
-
Nah, feel free to ask all the kinds of questions that arise on your side. It's just that I'd skip those that are far beyond the scope of the plugin (like in-depth knowledge of how an LLM is implemented). However, to highlight for you and the rest of the folks one approach to using this plugin at its best, I'd like to add the following: personally, I found that using the plugin with either
All but one of those points happen within the phantom (i.e. non-persistent) mode, because persistence just doesn't improve the helpfulness of the model in any observable way. The other thing to mention is that the current develop branch contains code that makes the described pipeline way more convenient, but it's far from ready for release yet.
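For anyone curious what "phantom mode" means mechanically: Sublime renders phantoms as inline HTML overlays that vanish when cleared or replaced, which is exactly why nothing persists. Here is a minimal sketch using Sublime's phantom API; this is not the plugin's actual code, and the command name and markup are made up:

```python
import sublime
import sublime_plugin


class ShowReplyPhantomCommand(sublime_plugin.TextCommand):
    """Hypothetical command: shows text inline as a phantom at the cursor."""

    def run(self, edit, reply="(model output would go here)"):
        # A PhantomSet must outlive run(); Sublime erases its phantoms as
        # soon as the set is garbage-collected, hence the instance attribute.
        if not hasattr(self, "phantoms"):
            self.phantoms = sublime.PhantomSet(self.view, "ai_reply")
        region = self.view.sel()[0]
        # Updating the set replaces what is shown -- nothing is written into
        # the buffer, which is why phantom mode is non-persistent.
        self.phantoms.update([
            sublime.Phantom(region, "<div>%s</div>" % reply, sublime.LAYOUT_BLOCK),
        ])
```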
-
Looks likely I will "adopt" 😁 a pre-loved Mac mini M4 with 24 GB RAM (only 256 GB of disk) in the next week or so. My plan is to install one of the MLX models on it: https://huggingface.co/models?library=mlx My reading of that page is that the model was converted to use MLX and then shrunk to 8-bit, which reduces it to only 19.8B params, so maybe it will run for me. If not, I will go back to a smaller 3.2 model or a coding-specific model from that MLX page. Note: xAI's Grok improved DRAMATICALLY a few days ago (only since 12 Dec 2024); I have not yet tried it with the plugin.
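If it helps once the machine arrives: mlx-lm ships an HTTP server that speaks the OpenAI chat completions protocol, so client code like the plugin's should be able to point at it. A sketch under those assumptions; the port and model id below are placeholders, not tested values:

```python
# Assumes mlx-lm's bundled OpenAI-compatible server, started with something
# like (check mlx-lm's docs for the exact flags and default port):
#   python -m mlx_lm.server --model mlx-community/Meta-Llama-3.1-8B-Instruct-8bit
from openai import OpenAI

local = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")  # no real key needed
reply = local.chat.completions.create(
    model="mlx-community/Meta-Llama-3.1-8B-Instruct-8bit",  # placeholder model id
    messages=[{"role": "user", "content": "Say hello from the Mac mini."}],
)
print(reply.choices[0].message.content)
```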
-
Conversational continuity works great in this plugin. This question is more about understanding how you do it, and how the LLM does it, so I can adapt what I do...
It is a bit confusing to me; I have no idea how you actually make it work in your code. When I exercise the API myself, each prompt starts afresh, with absolutely no knowledge of what has gone before (no matter what it claims). I thought (clearly wrongly) that the API key was enough to have the LLM keep a memory of previous conversations. So I thought (wrongly) that I could manage a few different conversations concurrently simply by using a few different keys. Well, that's unworkable... And anyway, OpenAI-sublime-text is my preferred way.
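To make the statelessness concrete, here is a minimal sketch against the chat completions API (the model name is just a placeholder): each request carries the entire conversation, and the client must append the reply before the next turn, or the model has never seen it.

```python
# Minimal sketch: the chat completions endpoint is stateless, so "memory"
# exists only because the client re-sends the prior turns each time.
from openai import OpenAI

client = OpenAI()  # the API key only authenticates; it stores no conversation

history = [{"role": "user", "content": "My name is Alice."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

history.append({"role": "user", "content": "What is my name?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
# The model can answer only because the earlier turns were sent again above.
print(second.choices[0].message.content)
```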
What I would like is a few different tabs open (not just one called AIChat), each holding a different conversation or topic that is being remembered. I am not sure how to do this using this plugin, or if it is even possible. Any thoughts on how I should adapt my USAGE to come closest to this? For example: each MODEL outputs to a different tab and independently keeps its own conversational memory. Then I would have one tab as a discussion tab and another as code-only.
About "reset chat history" - I know it clears all the contents in the Tab. I assume it also resets all memory of the conversation with the LLM, so the LLM forgets everything from that conversation also ? Any more details on what that does would be appreciated. Again I am just trying to understand to adjust my usage. At first I just let it run (all in same tab) for a few thousand lines. IS that problematic ? Should it be reset from time to time ? What about letting it run indefinitely into the same tab ? It is just a file, right ?, so it is only the usual issues about Sublime handling it and memory usage, right ? OR is some of that file being sent each time (with the prompt) to the LLM ? Some google searches say programmer has to manage the sessions themselves - so I really don't know how to think through this...