How to replace GPT-3 with my own ChatGLM2-6B model?
Sorry, I am seeing these issues late. The way to view these classes is:
LLMQAModel: A model that takes a question and generates an answer using a generator (which can be a GPT3Generator or an LLMClientGenerator).
LMGenerator: A class that could be used to load HF models like ChatGLM2. Unfortunately, we stopped using this class and instead relied on a client-server approach: you can use the LLMClientGenerator and point it to an LLMServer (https://github.com/HarshTrivedi/llm_server). This has two advantages: it removes the dependency on HF in this code (which often has to be updated), and it doesn't require each model to be loaded within DecomP.
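To make the wiring concrete, here is a schematic sketch of how these pieces fit together. The method names and signatures are illustrative assumptions, not DecomP's actual API:

```python
# Illustrative sketch only -- the signatures below (generate, answer,
# host/port arguments) are assumptions, not DecomP's real interfaces.

class LLMClientGenerator:
    """Sends prompts to a running LLMServer instead of loading a model locally."""

    def __init__(self, host: str, port: int):
        self.host = host
        self.port = port

    def generate(self, prompt: str) -> str:
        ...  # HTTP request to the LLMServer (see the sketch under choice 1 below)


class LLMQAModel:
    """Takes a question and produces an answer via a pluggable generator."""

    def __init__(self, generator):
        self.generator = generator  # e.g. a GPT3Generator or an LLMClientGenerator

    def answer(self, question: str) -> str:
        return self.generator.generate(f"Q: {question}\nA:")
```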
So with regard to your question about using a different model, you have two choices:
1. Use the LLM server code to start a server with your preferred model and change your configs to point to this server; see the sketch after this item. Note that you may need to update the server code to handle newer models.
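For example, here is a minimal client-side sketch, assuming the server exposes an HTTP generation endpoint. The route ("/generate") and parameter names are assumptions; check the llm_server README for the actual API:

```python
import requests

# Assumed address of a locally running LLMServer; adjust to your setup.
SERVER_URL = "http://localhost:8000"

def generate_from_server(prompt: str, max_tokens: int = 100) -> str:
    # "/generate", "prompt", and "max_tokens" are placeholder names;
    # use whatever the llm_server API actually defines.
    response = requests.get(
        f"{SERVER_URL}/generate",
        params={"prompt": prompt, "max_tokens": max_tokens},
    )
    response.raise_for_status()
    return response.json()["generated_text"]

print(generate_from_server("Q: Who wrote Hamlet?\nA:"))
```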
2. Modify the DecomP code to use LMGenerator, and update the LMGenerator code to the latest HF Transformers API needed for your model (a sketch follows). Keep in mind that this will load the model each time you run an experiment and would need a GPU machine for experiments.
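As a starting point for that route, here is a minimal sketch of loading ChatGLM2-6B with recent HF Transformers, roughly what an updated LMGenerator would need to do (the integration back into DecomP is not shown):

```python
from transformers import AutoModel, AutoTokenizer

# ChatGLM2-6B ships custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = model.half().cuda()  # fp16 on GPU (roughly 13 GB of GPU memory)
model.eval()

# ChatGLM2's remote code adds a chat() helper on the model.
response, history = model.chat(tokenizer, "Who wrote Hamlet?", history=[])
print(response)
```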
Please let me know if either of these directions works for you. I can help you with the changes needed to support other models.