Add ollama support #242

base: main

Conversation
- Support ollama runner commands with GPU variants
- Add ollama model detection and runner initialization
- Add comprehensive tests for ollama functionality
- Add comprehensive documentation for ollama integration

Co-authored-by: ericcurtin <[email protected]>
Reviewer's Guide

This PR integrates Ollama as an alternative model runner by introducing a global --ollama flag across runner commands, along with runner-type-aware container, image, and volume helpers.

Sequence diagram for pulling an Ollama model

```mermaid
sequenceDiagram
    actor User
    participant CLI
    participant "Ollama Runner Manager"
    participant "Docker Engine"
    User->>CLI: docker model pull ollama.com/library/smollm:135m
    CLI->>"Ollama Runner Manager": isOllamaModel(model)
    "Ollama Runner Manager"->>"Docker Engine": ensureOllamaRunnerAvailable()
    "Docker Engine"-->>"Ollama Runner Manager": Ollama runner available
    CLI-->>User: Error: ollama model pull not yet implemented
    CLI-->>User: Suggest docker exec docker-ollama-runner ollama pull smollm:135m
```
Class diagram for runner type and resource helpers

```mermaid
classDiagram
    class RunnerOptions {
        +bool ollama
        +string gpuMode
        +bool doNotTrack
        +bool pullImage
        +bool pruneContainers
    }
    class CleanupOptions {
        +bool ollama
        +bool models
        +bool removeImages
    }
    class Standalone {
        +EnsureControllerImage()
        +EnsureOllamaImage()
        +EnsureModelStorageVolume()
        +EnsureOllamaStorageVolume()
        +PruneControllerContainers()
        +PruneOllamaControllerContainers()
        +PruneControllerImages()
        +PruneOllamaImages()
        +FindControllerContainer()
        +FindOllamaControllerContainer()
        +CreateControllerContainer()
        +CreateOllamaControllerContainer()
    }
    RunnerOptions <.. Standalone
    CleanupOptions <.. Standalone
```
File-Level Changes
Summary of Changes

Hello @ericcurtin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the Docker Model Runner CLI by introducing first-class support for Ollama. Users can now seamlessly install, configure, and manage Ollama as an alternative model runner, leveraging its model format and API. The changes include new command-line flags for runner management, intelligent GPU detection for both NVIDIA and AMD hardware, and preliminary integration for pulling and running Ollama models.

Highlights
Hey there - I've reviewed your changes - here's some feedback:
- There’s a lot of duplicated opts.ollama branching in runInstallOrStart and related commands—consider abstracting runner-specific logic behind a common interface or helper to reduce copy-pasted code.
- The new ensureOllamaRunnerAvailable function mirrors much of the standalone initializer; consider merging them into a single ensureRunnerAvailable(ctx, runnerType) helper to avoid duplication and keep initialization consistent.
- The TODO placeholders in pull.go and run.go currently error out instructing users to docker exec manually; improving UX by proxying pull/run commands to the ollama daemon directly would make the integration more seamless.
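
On that last point, here is a minimal sketch of what proxying pull to the daemon could look like. pullViaOllama is a hypothetical helper name; the POST /api/pull endpoint with a JSON model field is ollama's documented pull API, and DefaultOllamaPort is the constant this PR adds (assumed imports: bytes, context, encoding/json, fmt, io, net/http, strings):

```go
// pullViaOllama forwards a pull request to the local ollama daemon instead of
// telling the user to run docker exec by hand.
func pullViaOllama(ctx context.Context, model string) error {
	// The daemon expects bare model names, without the ollama.com/ prefix.
	name := strings.TrimPrefix(model, "ollama.com/")
	body, err := json.Marshal(map[string]string{"model": name})
	if err != nil {
		return err
	}
	url := fmt.Sprintf("http://127.0.0.1:%d/api/pull", standalone.DefaultOllamaPort)
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("ollama pull failed with status %s", resp.Status)
	}
	// The daemon streams newline-delimited JSON progress objects; drain them.
	// (A real implementation would decode and display progress.)
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}
```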
## Individual Comments
### Comment 1
<location> `cmd/cli/commands/install-runner_test.go:70` </location>
<code_context>
}
}
+func TestInstallRunnerOllamaFlag(t *testing.T) {
+ cmd := newInstallRunner()
+
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding negative and edge case tests for the --ollama flag.
Please add tests for invalid flag values and cases where the flag is omitted or set to false, to improve coverage of flag parsing and error handling.
</issue_to_address>
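A minimal sketch of such cases, assuming newInstallRunner registers --ollama as a boolean flag defaulting to false (this checks flag parsing only; run-time behavior would need separate coverage):

```go
func TestInstallRunnerOllamaFlagParsing(t *testing.T) {
	// Omitted flag: should be registered and default to false.
	cmd := newInstallRunner()
	if err := cmd.Flags().Parse([]string{}); err != nil {
		t.Fatalf("unexpected parse error: %v", err)
	}
	ollama, err := cmd.Flags().GetBool("ollama")
	if err != nil {
		t.Fatalf("flag --ollama not registered as a bool: %v", err)
	}
	if ollama {
		t.Errorf("expected --ollama to default to false")
	}

	// Explicit false should parse cleanly.
	cmd = newInstallRunner()
	if err := cmd.Flags().Parse([]string{"--ollama=false"}); err != nil {
		t.Errorf("unexpected parse error for --ollama=false: %v", err)
	}

	// Non-boolean values should be rejected at parse time.
	cmd = newInstallRunner()
	if err := cmd.Flags().Parse([]string{"--ollama=notabool"}); err == nil {
		t.Errorf("expected parse error for --ollama=notabool, got nil")
	}
}
```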
### Comment 2
<location> `cmd/cli/commands/install-runner_test.go:105` </location>
<code_context>
+ }
+}
+
+func TestInstallRunnerGPUFlag(t *testing.T) {
+ cmd := newInstallRunner()
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for all supported GPU flag values and invalid input.
Please include test cases for 'auto', 'cuda', 'none', and an invalid value to fully cover the flag's behavior.
</issue_to_address>
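A minimal sketch covering those values, assuming the flag is registered as a string named gpu. Since a string flag accepts any value at parse time, the invalid-input case asserts on command execution, where validation would presumably happen:

```go
func TestInstallRunnerGPUFlagValues(t *testing.T) {
	// Supported values should parse and round-trip unchanged.
	for _, val := range []string{"auto", "cuda", "rocm", "none"} {
		cmd := newInstallRunner()
		if err := cmd.Flags().Parse([]string{"--gpu=" + val}); err != nil {
			t.Errorf("unexpected parse error for --gpu=%s: %v", val, err)
			continue
		}
		got, err := cmd.Flags().GetString("gpu")
		if err != nil {
			t.Fatalf("flag --gpu not registered as a string: %v", err)
		}
		if got != val {
			t.Errorf("expected --gpu=%s, got %q", val, got)
		}
	}

	// An unsupported value parses as a plain string; the command itself
	// is expected to reject it during execution.
	cmd := newInstallRunner()
	cmd.SetArgs([]string{"--gpu=bogus"})
	if err := cmd.Execute(); err == nil {
		t.Errorf("expected error for --gpu=bogus, got nil")
	}
}
```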
### Comment 3
<location> `cmd/cli/commands/reinstall-runner_test.go:50-52` </location>
<code_context>
}
}
+func TestReinstallRunnerOllamaFlag(t *testing.T) {
+ cmd := newReinstallRunner()
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add negative and edge case tests for the --ollama flag in reinstall-runner.
Include tests for invalid flag values, explicit false, and absence of the flag to match the coverage in install-runner tests.
```suggestion
func TestReinstallRunnerOllamaFlag(t *testing.T) {
cmd := newReinstallRunner()
// Test absence of --ollama flag (should use default value)
cmd.SetArgs([]string{})
err := cmd.Execute()
if err != nil {
t.Errorf("Unexpected error when --ollama flag is absent: %v", err)
}
ollamaFlag := cmd.Flags().Lookup("ollama")
if ollamaFlag == nil {
t.Errorf("Expected flag '--ollama' not found")
}
// Check default value (assuming default is false, adjust if needed)
if ollamaFlag.Value.String() != "false" {
t.Errorf("Expected default value of --ollama to be 'false', got '%s'", ollamaFlag.Value.String())
}
// Test --ollama=true
cmd = newReinstallRunner()
cmd.SetArgs([]string{"--ollama=true"})
err = cmd.Execute()
if err != nil {
t.Errorf("Unexpected error when --ollama=true: %v", err)
}
ollamaFlag = cmd.Flags().Lookup("ollama")
if ollamaFlag.Value.String() != "true" {
t.Errorf("Expected value of --ollama to be 'true', got '%s'", ollamaFlag.Value.String())
}
// Test --ollama=false
cmd = newReinstallRunner()
cmd.SetArgs([]string{"--ollama=false"})
err = cmd.Execute()
if err != nil {
t.Errorf("Unexpected error when --ollama=false: %v", err)
}
ollamaFlag = cmd.Flags().Lookup("ollama")
if ollamaFlag.Value.String() != "false" {
t.Errorf("Expected value of --ollama to be 'false', got '%s'", ollamaFlag.Value.String())
}
// Test invalid value for --ollama
cmd = newReinstallRunner()
cmd.SetArgs([]string{"--ollama=notabool"})
err = cmd.Execute()
if err == nil {
t.Errorf("Expected error when --ollama is set to an invalid value, got nil")
}
}
```
</issue_to_address>
Code Review
This pull request adds comprehensive support for using ollama as an alternative model runner, which is a great new feature. The changes include new CLI flags, GPU variant handling, and automatic management of containers and volumes for the ollama runner. The documentation and tests are thorough and well-written.
I've identified a critical issue with the auto-installation logic for the ollama runner that will prevent it from starting correctly. Additionally, there are a few areas with code duplication in the command implementations that could be refactored for better maintainability, and a couple of minor issues with user-facing error messages. My detailed feedback is in the comments below.
```go
if err := waitForStandaloneRunnerAfterInstall(ctx); err != nil {
	return nil, err
}
```
The waitForStandaloneRunnerAfterInstall function is not suitable for waiting for the ollama runner. This function uses the global desktopClient, which is configured for the default model runner's port (e.g., 12434), not the ollama runner's port (11434). This will cause the installation to hang and eventually time out, preventing the ollama runner from being installed correctly.
A dedicated waiting logic that polls the ollama port is required.
(You will also need to add "net/http" to your imports for this suggestion to work).
```diff
-if err := waitForStandaloneRunnerAfterInstall(ctx); err != nil {
-	return nil, err
-}
+if err := func() error {
+	url := fmt.Sprintf("http://127.0.0.1:%d", standalone.DefaultOllamaPort)
+	for tries := installWaitTries; tries > 0; tries-- {
+		resp, err := http.Get(url)
+		if err == nil {
+			resp.Body.Close()
+			if resp.StatusCode == http.StatusOK {
+				return nil
+			}
+		}
+		select {
+		case <-time.After(installWaitRetryInterval):
+		case <-ctx.Done():
+			return errors.New("cancelled waiting for ollama runner to initialize")
+		}
+	}
+	return errors.New("ollama runner took too long to initialize")
+}(); err != nil {
+	return nil, err
+}
```
```diff
 // If pruning containers (reinstall), remove any existing model runner containers.
 if opts.pruneContainers {
-	if err := standalone.PruneControllerContainers(cmd.Context(), dockerClient, false, cmd); err != nil {
-		return fmt.Errorf("unable to remove model runner container(s): %w", err)
+	if opts.ollama {
+		if err := standalone.PruneOllamaControllerContainers(cmd.Context(), dockerClient, false, cmd); err != nil {
+			return fmt.Errorf("unable to remove ollama runner container(s): %w", err)
+		}
+	} else {
+		if err := standalone.PruneControllerContainers(cmd.Context(), dockerClient, false, cmd); err != nil {
+			return fmt.Errorf("unable to remove model runner container(s): %w", err)
+		}
 	}
 } else {
 	// Check if an active model runner container already exists (install only).
-	if ctrID, ctrName, _, err := standalone.FindControllerContainer(cmd.Context(), dockerClient); err != nil {
+	var ctrID, ctrName string
+	var err error
+	if opts.ollama {
+		ctrID, ctrName, _, err = standalone.FindOllamaControllerContainer(cmd.Context(), dockerClient)
+	} else {
+		ctrID, ctrName, _, err = standalone.FindControllerContainer(cmd.Context(), dockerClient)
+	}
+	if err != nil {
 		return err
 	} else if ctrID != "" {
+		runnerType := "Model Runner"
+		if opts.ollama {
+			runnerType = "ollama runner"
+		}
 		if ctrName != "" {
-			cmd.Printf("Model Runner container %s (%s) is already running\n", ctrName, ctrID[:12])
+			cmd.Printf("%s container %s (%s) is already running\n", runnerType, ctrName, ctrID[:12])
 		} else {
-			cmd.Printf("Model Runner container %s is already running\n", ctrID[:12])
+			cmd.Printf("%s container %s is already running\n", runnerType, ctrID[:12])
 		}
 		return nil
 	}
 }
```
This function contains a significant amount of duplicated logic within if opts.ollama blocks for handling different runner types. This reduces readability and maintainability.
You can refactor this by defining runner-specific configurations at the beginning of the function and then using a single, unified code path. The standalone package has already been refactored with generic functions like PruneControllerContainersByType and FindControllerContainerByType that you can leverage here.
This same pattern of duplication appears later in the function when ensuring images and volumes, and creating the container, which could also be refactored similarly.
```go
runnerTypeStr := "Model Runner"
runnerTypeLabel := "model-runner"
pruneErrMsg := "unable to remove model runner container(s): %w"
if opts.ollama {
	runnerTypeStr = "ollama runner"
	runnerTypeLabel = "ollama"
	pruneErrMsg = "unable to remove ollama runner container(s): %w"
}

// If pruning containers (reinstall), remove any existing model runner containers.
if opts.pruneContainers {
	if err := standalone.PruneControllerContainersByType(cmd.Context(), dockerClient, false, cmd, runnerTypeLabel); err != nil {
		return fmt.Errorf(pruneErrMsg, err)
	}
} else {
	// Check if an active model runner container already exists (install only).
	ctrID, ctrName, _, err := standalone.FindControllerContainerByType(cmd.Context(), dockerClient, runnerTypeLabel)
	if err != nil {
		return err
	} else if ctrID != "" {
		if ctrName != "" {
			cmd.Printf("%s container %s (%s) is already running\n", runnerTypeStr, ctrName, ctrID[:12])
		} else {
			cmd.Printf("%s container %s is already running\n", runnerTypeStr, ctrID[:12])
		}
		return nil
	}
}
```

```go
}
// TODO: Implement ollama-specific pull logic that communicates
// with the ollama daemon on port 11434
return fmt.Errorf("ollama model pull not yet implemented - please use 'docker exec docker-ollama-runner ollama pull %s'", model)
```
The suggested docker exec command in the error message is incorrect. It includes the ollama.com/ prefix in the model name, but the ollama CLI inside the container does not expect this prefix. This will cause confusion and the command will fail for the user.
The model name should have the ollama.com/ prefix stripped before being included in the error message.
(You will also need to add "strings" to your imports for this suggestion to work).
```diff
-return fmt.Errorf("ollama model pull not yet implemented - please use 'docker exec docker-ollama-runner ollama pull %s'", model)
+return fmt.Errorf("ollama model pull not yet implemented - please use 'docker exec docker-ollama-runner ollama pull %s'", strings.TrimPrefix(model, "ollama.com/"))
```
```go
}
// TODO: Implement ollama-specific run logic that communicates
// with the ollama daemon on port 11434
return fmt.Errorf("ollama model run not yet implemented - please use 'docker exec -it docker-ollama-runner ollama run %s'", model)
```
Similar to the pull command, the suggested docker exec command in the error message is incorrect. It includes the ollama.com/ prefix, which the ollama run command inside the container does not expect. This will cause the suggested command to fail for the user.
The prefix should be stripped from the model name.
```diff
-return fmt.Errorf("ollama model run not yet implemented - please use 'docker exec -it docker-ollama-runner ollama run %s'", model)
+return fmt.Errorf("ollama model run not yet implemented - please use 'docker exec -it docker-ollama-runner ollama run %s'", strings.TrimPrefix(model, "ollama.com/"))
```
```diff
 // Remove any model runner containers.
-if err := standalone.PruneControllerContainers(cmd.Context(), dockerClient, false, cmd); err != nil {
-	return fmt.Errorf("unable to remove model runner container(s): %w", err)
+if opts.ollama {
+	if err := standalone.PruneOllamaControllerContainers(cmd.Context(), dockerClient, false, cmd); err != nil {
+		return fmt.Errorf("unable to remove ollama runner container(s): %w", err)
+	}
+} else {
+	if err := standalone.PruneControllerContainers(cmd.Context(), dockerClient, false, cmd); err != nil {
+		return fmt.Errorf("unable to remove model runner container(s): %w", err)
+	}
 }

 // Remove model runner images, if requested.
 if opts.removeImages {
-	if err := standalone.PruneControllerImages(cmd.Context(), dockerClient, cmd); err != nil {
-		return fmt.Errorf("unable to remove model runner image(s): %w", err)
+	if opts.ollama {
+		if err := standalone.PruneOllamaImages(cmd.Context(), dockerClient, cmd); err != nil {
+			return fmt.Errorf("unable to remove ollama image(s): %w", err)
+		}
+	} else {
+		if err := standalone.PruneControllerImages(cmd.Context(), dockerClient, cmd); err != nil {
+			return fmt.Errorf("unable to remove model runner image(s): %w", err)
+		}
 	}
 }
```
This function contains duplicated logic for handling ollama and default runners, which makes it verbose and harder to maintain. You can simplify this by abstracting the runner-specific calls into function variables, which will clean up the code and reduce duplication.
```go
var pruneContainersFunc func(context.Context, client.ContainerAPIClient, bool, standalone.StatusPrinter) error
var pruneImagesFunc func(context.Context, client.ImageAPIClient, standalone.StatusPrinter) error
var pruneContainersErrMsg, pruneImagesErrMsg string
if opts.ollama {
	pruneContainersFunc = standalone.PruneOllamaControllerContainers
	pruneImagesFunc = standalone.PruneOllamaImages
	pruneContainersErrMsg = "unable to remove ollama runner container(s): %w"
	pruneImagesErrMsg = "unable to remove ollama image(s): %w"
} else {
	pruneContainersFunc = standalone.PruneControllerContainers
	pruneImagesFunc = standalone.PruneControllerImages
	pruneContainersErrMsg = "unable to remove model runner container(s): %w"
	pruneImagesErrMsg = "unable to remove model runner image(s): %w"
}

// Remove any model runner containers.
if err := pruneContainersFunc(cmd.Context(), dockerClient, false, cmd); err != nil {
	return fmt.Errorf(pruneContainersErrMsg, err)
}

// Remove model runner images, if requested.
if opts.removeImages {
	if err := pruneImagesFunc(cmd.Context(), dockerClient, cmd); err != nil {
		return fmt.Errorf(pruneImagesErrMsg, err)
	}
}
```
Pull Request Overview
Add initial ollama runner support alongside the existing model-runner, including container/image/volume management, CLI flags, and basic model detection.
- Introduces runner typing via labels to distinguish model-runner vs ollama resources.
- Adds CLI flags and helper paths to install/start/stop/reinstall/uninstall the ollama runner, with tests and docs.
- Implements image/volume/container helpers for ollama, plus basic ollama model detection.
Reviewed Changes
Copilot reviewed 21 out of 21 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| cmd/cli/pkg/standalone/volumes.go | Generalizes volume ensure logic and adds an ollama storage volume. |
| cmd/cli/pkg/standalone/ports.go | Adds default ollama port 11434. |
| cmd/cli/pkg/standalone/labels.go | Adds label and constants to differentiate runner types. |
| cmd/cli/pkg/standalone/images.go | Refactors image pull logic and adds prune/pull for ollama images. |
| cmd/cli/pkg/standalone/controller_image.go | Adds ollama image name/version/variant helpers. |
| cmd/cli/pkg/standalone/containers.go | Adds ollama container helpers and runner-type-aware find/prune/create. |
| cmd/cli/pkg/gpu/gpu.go | Adds GPUSupportROCm enum. |
| cmd/cli/commands/utils_test.go | Tests ollama model detection helper. |
| cmd/cli/commands/utils.go | Adds isOllamaModel helper. |
| cmd/cli/commands/uninstall-runner.go | Adds --ollama support and image pruning path. |
| cmd/cli/commands/stop-runner.go | Adds --ollama flag. |
| cmd/cli/commands/start-runner.go | Adds --ollama flag and GPU help updates. |
| cmd/cli/commands/run.go | Routes ollama models to the ollama runner path (not yet implemented). |
| cmd/cli/commands/restart-runner.go | Adds --ollama flag and help updates. |
| cmd/cli/commands/reinstall-runner_test.go | Tests for --ollama flag presence/behavior. |
| cmd/cli/commands/reinstall-runner.go | Adds --ollama flag and help updates. |
| cmd/cli/commands/pull.go | Routes ollama models to the ollama runner path (placeholder error for now). |
| cmd/cli/commands/list.go | Adds TODO for listing ollama models. |
| cmd/cli/commands/install-runner_test.go | Adds tests for --ollama and GPU=rocm flags. |
| cmd/cli/commands/install-runner.go | Implements ensureOllamaRunnerAvailable and integrates --ollama throughout. |
| README.md | Documents ollama integration, usage, and GPU guidance. |
```go
// Note: CUDA GPUs use the base "latest" image
if detectedGPU == gpupkg.GPUSupportCUDA {
	return "" // ollama/ollama:latest works for CUDA
}
```
Copilot AI (Oct 15, 2025)
ollamaImageVariant never returns 'rocm' for AMD GPUs, so the ROCm image is not selected in auto paths (e.g., ensureOllamaRunnerAvailable). Return 'rocm' when detectedGPU == GPUSupportROCm to select the correct image variant.
```suggestion
}
if detectedGPU == gpupkg.GPUSupportROCm {
	return "rocm"
}
```
```go
// Poll until we get a response from the ollama runner.
// Note: We reuse the same wait logic, assuming ollama responds similarly
if err := waitForStandaloneRunnerAfterInstall(ctx); err != nil {
	return nil, err
```
Copilot AI (Oct 15, 2025)
waitForStandaloneRunnerAfterInstall checks the Desktop model-runner status, not the ollama container, so ollama readiness will likely time out even when the daemon is up. Implement a dedicated readiness check (e.g., try HTTP to http://127.0.0.1:11434/api/tags or poll the container state) and use it here instead.
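A minimal sketch of such a readiness check, reusing standalone.DefaultOllamaPort, installWaitTries, and installWaitRetryInterval from the Gemini suggestion earlier in this review (the function name is hypothetical; assumed imports: context, errors, fmt, net/http, time):

```go
// waitForOllamaRunnerAfterInstall polls the ollama daemon's /api/tags endpoint
// until it responds with 200 OK, the retry budget is exhausted, or ctx is
// cancelled.
func waitForOllamaRunnerAfterInstall(ctx context.Context) error {
	url := fmt.Sprintf("http://127.0.0.1:%d/api/tags", standalone.DefaultOllamaPort)
	for tries := installWaitTries; tries > 0; tries-- {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		if resp, err := http.DefaultClient.Do(req); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-time.After(installWaitRetryInterval):
		case <-ctx.Done():
			return errors.New("cancelled waiting for ollama runner to initialize")
		}
	}
	return errors.New("ollama runner took too long to initialize")
}
```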
```go
// GPUSupportROCm indicates ROCm GPU support (AMD).
GPUSupportROCm
```
Copilot AI (Oct 15, 2025)
GPUSupportROCm is defined but ProbeGPUSupport never returns it, so 'auto' detection will not select ROCm. Extend ProbeGPUSupport to detect ROCm (e.g., presence of /dev/kfd and/or rocm-smi) and return GPUSupportROCm when applicable.
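A minimal sketch of such a probe using the heuristics named above; the function name is hypothetical, and wiring it into ProbeGPUSupport's 'auto' path is left out (assumed imports: os, os/exec):

```go
// probeROCmSupport reports whether AMD ROCm support appears to be present.
func probeROCmSupport() bool {
	// /dev/kfd is the AMD kernel fusion driver device node exposed by amdgpu.
	if _, err := os.Stat("/dev/kfd"); err == nil {
		return true
	}
	// Fall back to checking for the ROCm system management CLI on PATH.
	if _, err := exec.LookPath("rocm-smi"); err == nil {
		return true
	}
	return false
}
```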
```diff
-printer.Printf("Creating model storage volume %s...\n", modelStorageVolumeName)
-volume, err := dockerClient.VolumeCreate(ctx, volume.CreateOptions{
-	Name: modelStorageVolumeName,
+printer.Printf("Creating model storage volume %s...\n", volumeName)
```
Copilot AI (Oct 15, 2025)
[nitpick] This message is used for both model-runner and ollama volumes; it can be confusing when creating the ollama volume. Consider a generic message like 'Creating storage volume %s...' or include the runner type in the message.
```diff
-printer.Printf("Creating model storage volume %s...\n", volumeName)
+printer.Printf("Creating %s storage volume %s...\n", runnerType, volumeName)
```
```diff
 // Start the container.
-printer.Printf("Starting model runner container %s...\n", controllerContainerName)
-if err := ensureContainerStarted(ctx, dockerClient, controllerContainerName); err != nil {
+printer.Printf("Starting model runner container %s...\n", containerName)
```
Copilot AI (Oct 15, 2025)
[nitpick] The start message always says 'model runner' even for ollama containers. Since createControllerContainerInternal receives runnerType, prefer printer.Printf("Starting %s container %s...\n", runnerType, containerName) to reflect the correct runner.
```diff
-printer.Printf("Starting model runner container %s...\n", containerName)
+printer.Printf("Starting %s container %s...\n", runnerType, containerName)
```
```diff
 	_ = dockerClient.ContainerRemove(ctx, resp.ID, container.RemoveOptions{Force: true})
 }
-return fmt.Errorf("failed to start container %s: %w", controllerContainerName, err)
+return fmt.Errorf("failed to start container %s: %w", containerName, err)
```
Copilot AI (Oct 15, 2025)
[nitpick] The same applies to this error message: it doesn't reflect the runner type. Since createControllerContainerInternal receives runnerType, prefer fmt.Errorf("failed to start %s container %s: %w", runnerType, containerName, err) to reflect the correct runner.
Support ollama runner commands with GPU variants
Add ollama model detection and runner initialization
Add comprehensive tests for ollama functionality
Add comprehensive documentation for ollama integration
Summary by Sourcery
Add support for ollama as an alternative model runner, including a new --ollama flag across runner commands, GPU variant handling, and automatic container/image/volume lifecycle management
New Features:
Enhancements:
Documentation:
Tests: