Frank/ollama module #1099
base: develop
Conversation
Thanks for the PR. I had a quick look and added some minor suggestions and improvements.
```csharp
public async Task StartOllamaAsync()
{
    if (State != TestcontainersStates.Created && State != TestcontainersStates.Running)
    {
        throw new InvalidOperationException("Cannot start a container that has not been created.");
    }

    Task.WaitAll(ExecAsync(new List<string>()
    {
        "ollama", "run", ModelName,
    }));

    await Task.CompletedTask;
}
```
Does it make sense to pass the model name as an argument to the member, e.g. `Run(string modelName, CancellationToken ct = default)`? In addition, please do not block the thread; simply `return ExecAsync(["ollama", "run", modelName], ct);`.
Agreed and resolved regarding the return. I added two methods: one taking only the CancellationToken, and one taking the model name and the CancellationToken. Have a look and see if you agree.
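For illustration, the two overloads might look like the sketch below; the member name follows the reviewer's suggestion and `ExecResult` is the Testcontainers exec return type, but the exact shape in the PR may differ:

```csharp
// Sketch of the two members described above; signatures are assumptions,
// not necessarily the exact code in this PR.
public Task<ExecResult> Run(CancellationToken ct = default)
{
    // Fall back to the model configured on the container.
    return Run(ModelName, ct);
}

public Task<ExecResult> Run(string modelName, CancellationToken ct = default)
{
    // Return the task directly instead of blocking the thread.
    return ExecAsync(new[] { "ollama", "run", modelName }, ct);
}
```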
This commit updates the Ollama configuration to allow more customization options and removes unnecessary test helpers. The OllamaConfiguration class was refactored to provide more configurable parameters such as the VolumePath and VolumeName. Additionally, the TestOutputHelperExtensions and TestOutputLogger classes were deleted as they were not providing any significant value.
@HofmeisterAn I'm updating the branch with my current state, as the tests were green locally, but I think it needs another pass. No rush; I'm busy with my day job, so no need for you to rush either 😄
Thanks for your contribution. Testcontainers for Java and Go have recently released a new Ollama module, which can detect and enable GPU support and can commit the container's state to create a new image. It would be nice to align the .NET implementation with the others. See the Java implementation.
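For reference, GPU access can already be requested through the generic .NET API via a create-parameter modifier. The sketch below is an illustration under stated assumptions: it presumes the NVIDIA container toolkit is installed on the host and uses the generic `ContainerBuilder`, not this module; the image tag is illustrative.

```csharp
using System.Collections.Generic;
using Docker.DotNet.Models;
using DotNet.Testcontainers.Builders;

// Sketch: request all NVIDIA GPUs for an Ollama container. Assumes the host
// has the NVIDIA container toolkit installed.
var container = new ContainerBuilder()
    .WithImage("ollama/ollama")
    .WithPortBinding(11434, true)
    .WithCreateParameterModifier(parameters =>
    {
        parameters.HostConfig.DeviceRequests = new List<DeviceRequest>
        {
            new DeviceRequest
            {
                Driver = "nvidia",
                Count = -1, // -1 requests all available GPUs
                Capabilities = new List<IList<string>> { new List<string> { "gpu" } },
            },
        };
    })
    .Build();
```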
```csharp
/// <summary>
/// The default model name for the OllamaBuilder.
/// </summary>
public const string DefaultModelName = OllamaModels.Llama2;
```
Models are just like images and should be provided as a string rather than having a specific type. This will also reduce the number of PRs needed just to keep the model list up to date.
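A string-based builder member could then follow the usual module pattern; `WithModel` and the configuration constructor below are assumptions for illustration, not the final API:

```csharp
// Hypothetical: configure the model as a plain string, just like an image.
public OllamaBuilder WithModel(string modelName)
{
    return Merge(DockerResourceConfiguration, new OllamaConfiguration(modelName: modelName));
}

// Usage: any model tag works without touching the library.
// var ollama = new OllamaBuilder().WithModel("llama2").Build();
```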
```csharp
/// <summary>
/// Default volume path.
/// </summary>
public const string DefaultVolumePath = "/root/.ollama";
```
Volume mounting should be optional rather than the default, at least for now. We can align with the Java and Go implementations.
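For illustration, opt-in mounting is already possible through the inherited builder API; the volume name below is an example, not part of the module:

```csharp
// Default: no volume, models are pulled fresh into the container each run.
var ollama = new OllamaBuilder().Build();

// Opt-in: persist downloaded models across runs with a named volume
// ("ollama-data" is an example name).
var ollamaWithCache = new OllamaBuilder()
    .WithVolumeMount("ollama-data", "/root/.ollama")
    .Build();
```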
What does this PR do?
Added a new module project that uses Ollama Docker images to provide local LLM capabilities. It currently supports only CPU workloads, but the environment variables needed to enable GPU support (when the prerequisites are present) can be added later, so CUDA-based processing isn't precluded.
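A rough usage sketch of the module as proposed here; the names follow this PR's code (`OllamaBuilder`, `StartOllamaAsync`) and may change before merge:

```csharp
// Sketch based on this PR's API; names may differ in the merged module.
var ollama = new OllamaBuilder().Build();

await ollama.StartAsync();

// Pull and run the configured model inside the container.
await ollama.StartOllamaAsync();

// The Ollama HTTP API is then reachable on the mapped default port (11434).
var endpoint = new UriBuilder("http", ollama.Hostname, ollama.GetMappedPublicPort(11434)).Uri;
```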
Why is it important?
This module is very powerful and can provide a lot of value to those who are incorporating Ollama models into their projects.
Related issues
How to test this PR
Run the tests added in this PR.
Follow-ups