Local AI #7
Hey, thank you for showing interest in my project. Yes, you should be able to use a local LLM; you just need to add the code for accessing the local LLM. However, if you're planning to deploy, I'd suggest you look at the pricing of the cloud service you intend to use: local LLM weights can be quite large, so the billing can shoot up significantly. Let me know if you have any other questions. Thanks
If I have a local LLM that is being run on a server like localhost, where would I modify the code?
Also, any chance you can share some screenshots?
I'm using Windows and LM Studio, which lets you start a server on a local port. I just modified get_model() in utils.py as follows:

def get_model(model_name):

I wasn't actually on the same computer: my desktop had the finsight code and was using the LLM on a laptop, so replace the IP address with your localhost. The api_key is not needed and will be ignored by a local LLM. A better solution would be to allow selection of OpenAI or a local LLM, and use the API key only if needed.

I also modified 1_📊_Finance_Metrics_Review.py so it only asks me for the API key if it isn't defined. I'm using Visual Studio Code, so I just put the API keys in my launch.json for debugging so I don't have to enter them every time.
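To sketch the "select OpenAI or a local LLM" idea mentioned above: the snippet below is a minimal, standard-library-only illustration, not the project's actual get_model(). It assumes LM Studio's default local server address (http://localhost:1234/v1, which exposes an OpenAI-compatible API); the function name get_model_config and the use_local flag are hypothetical.

```python
import os

# Assumed endpoints: LM Studio's default local server and the OpenAI API.
LOCAL_BASE_URL = "http://localhost:1234/v1"
OPENAI_BASE_URL = "https://api.openai.com/v1"


def get_model_config(model_name: str, use_local: bool = False) -> dict:
    """Return the connection settings a get_model()-style helper would need.

    A local LLM server ignores the api_key, so any placeholder works;
    for OpenAI the key must come from the environment.
    """
    if use_local:
        return {
            "model": model_name,
            "base_url": LOCAL_BASE_URL,
            "api_key": "not-needed",  # ignored by the local server
        }
    return {
        "model": model_name,
        "base_url": OPENAI_BASE_URL,
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
    }
```

With the openai Python package (v1+), you would then pass these settings straight through, e.g. OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"]), so the same code path serves both back ends.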
Your project caught my attention. Feel free to check out my project on my GitHub as well. Would it be possible to adjust your code to work with a local LLM instead of through GPT-4?