AI Practitioners often want to experiment locally before scaling their models to …
The proposed local execution mode will allow engineers to quickly test their models in isolated containers or virtualenvs via subprocess, facilitating a faster and more efficient workflow.
### Goals
- Allow users to run training jobs on their local machines using container runtimes or subprocess.
- Rework the current Kubeflow Trainer SDK to implement Execution Backends, with the Kubernetes Backend as the default.
- Implement Local Execution Backends that integrate seamlessly with the Kubeflow SDK, supporting both single-node and multi-node training processes.
- Provide an implementation that supports PyTorch, with the potential to extend to other ML frameworks or runtimes.
- Ensure compatibility with existing Kubeflow Trainer SDK features and user interfaces.
### Non-Goals
- Full support for distributed training in the first phase of implementation.
- Support for all ML frameworks or runtime environments in the initial proof-of-concept.
- Major changes to the Kubeflow Trainer SDK architecture.
## Proposal
The local execution mode will allow users to run training jobs in a container runtime environment on their local machines, mimicking the larger Kubeflow setup without requiring Kubernetes.
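To make the intent concrete, here is a minimal sketch of the backend selection this implies. The backend names and the dispatch shape are illustrative assumptions, not the final Kubeflow Trainer SDK API:

```python
# Illustrative sketch only: "kubernetes" and "local" backend names and the
# dispatch function are assumptions, not the actual SDK surface.

def run_training_job(backend: str) -> str:
    """Dispatch a training job to the configured execution backend."""
    dispatch = {
        # Default: submit a TrainJob to a Kubernetes cluster.
        "kubernetes": "submitted TrainJob to cluster",
        # Local mode: run the job in a local subprocess or container.
        "local": "ran training job on local machine",
    }
    if backend not in dispatch:
        raise ValueError(f"unknown backend: {backend}")
    return dispatch[backend]


print(run_training_job("local"))  # ran training job on local machine
```

The key design point is that the caller's code does not change between modes; only the configured backend does.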

### User Stories (Optional)
#### Story 1
As an AI Practitioner, I want to run my model locally using Podman/Docker containers so that I can test my training job without incurring the costs of running a Kubernetes cluster.
#### Story 2
As an AI Practitioner, I want to initialize datasets and models within Podman/Docker containers, so that I can streamline my local training environment.
### Notes/Constraints/Caveats
- Local execution mode will first support Subprocess, with future plans to explore Podman, Docker, and Apple Container.
- The subprocess implementation will be restricted to single-node execution.
- The local execution mode will initially support only the PyTorch runtime.
- Resource limits on memory, CPU, and GPU are not fully supported locally and may remain unsupported where the execution backend does not expose APIs for them.
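The single-node subprocess constraint can be sketched with the standard library alone. The environment variable names below follow the common `torch.distributed` convention and are assumptions here, not the SDK's actual contract:

```python
import os
import subprocess
import sys

# Sketch: the training entrypoint runs as one isolated child process.
# RANK/WORLD_SIZE mirror the torch.distributed convention (assumption);
# a single-node job always has rank 0 and a world size of 1.
env = dict(os.environ, RANK="0", WORLD_SIZE="1", MASTER_ADDR="127.0.0.1")
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['WORLD_SIZE'])"],
    env=env,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # 1
```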
### Risks and Mitigations
- **Risk**: Compatibility issues with non-Docker container runtimes.
- **Mitigation**: Initially restrict support to Podman/Docker and evaluate alternatives for future phases.
- **Risk**: Potential conflicts between local and Kubernetes execution modes.
The local execution mode will be implemented using new `LocalProcessBackend`, `PodmanBackend`, and `DockerBackend` classes, which will allow users to execute training jobs using containers and virtual environment isolation. The client will utilize container runtime capabilities to create isolated environments, including volumes and networks, to manage the training lifecycle. It will also allow for easy dataset and model initialization.
- Different execution backends will need to implement the same interface from the `RuntimeBackend` abstract class so `TrainerClient` can initialize and load the backend.
- The Podman/Docker client will connect to a local container environment, create shared volumes, and initialize datasets and models as needed.
- The **DockerBackend** will manage Docker containers, networks, and volumes using runtime definitions specified by the user.
- The **PodmanBackend** will manage Podman containers, networks, and volumes using runtime definitions specified by the user.
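A minimal sketch of this abstraction follows. Only the `RuntimeBackend`, `TrainerClient`, and `LocalProcessBackend` names come from the text above; the `train` method and the `jobs` bookkeeping are illustrative assumptions:

```python
# Sketch of the backend abstraction, assuming hypothetical method names.
from abc import ABC, abstractmethod


class RuntimeBackend(ABC):
    """Common interface every execution backend must implement."""

    @abstractmethod
    def train(self, func, num_nodes: int = 1) -> str:
        """Launch a training job and return a job identifier."""


class LocalProcessBackend(RuntimeBackend):
    """Toy single-node backend that runs the function in-process."""

    def __init__(self):
        self.jobs = {}

    def train(self, func, num_nodes: int = 1) -> str:
        if num_nodes != 1:
            raise ValueError("subprocess backend is single-node only")
        job_id = f"job-{len(self.jobs)}"
        # A real backend would spawn an isolated subprocess or container;
        # this sketch just calls the function and records its result.
        self.jobs[job_id] = func()
        return job_id


class TrainerClient:
    """Initializes and loads whichever backend the user configures."""

    def __init__(self, backend: RuntimeBackend):
        self.backend = backend


client = TrainerClient(LocalProcessBackend())
job = client.backend.train(lambda: "trained")
print(job)  # job-0
```

Because every backend satisfies the same interface, swapping `LocalProcessBackend` for a `DockerBackend` or `PodmanBackend` would require no change to the `TrainerClient` call site.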
- **E2E Tests**: Conduct end-to-end tests to validate the local execution mode, ensuring that jobs can be initialized, executed, and tracked correctly within Podman/Docker containers.
### Graduation Criteria
- The feature will move to the `beta` stage once it supports multi-node training with the PyTorch framework as the default runtime and works seamlessly with local environments.
- Full support for multi-worker configurations and additional ML frameworks will be considered for the `stable` release.
## Implementation History
- **KEP Creation**: April 2025
- **Implementation Start**: April 2025
## Drawbacks
- The initial implementation will be limited to single-worker training jobs, which may restrict users who need multi-node support.
- The local execution mode will initially only support Subprocess and may require additional configurations for Podman/Docker container runtimes in the future.
## Alternatives
- **Full Kubernetes Execution**: Enable users to always run jobs on Kubernetes clusters, though this comes with higher costs and longer development cycles for ML engineers.