
Helm chart #5512

Merged: 49 commits into argilla-io:develop on Oct 1, 2024

Conversation

bikash119 (Contributor) commented on Sep 18, 2024:

Description

  • Use a Helm chart to manage k8s resources
  • Added a pre-commit hook that runs helm lint to find and fix linting errors in chart templates.

Closes #<issue_number>

Type of change

  • Improvement (change adding some improvement to an existing functionality)
  • Documentation update

How Has This Been Tested

On a dev machine (the steps below are also collected into a single command sequence after this list):

  • install minikube
  • install helm
  • helm install <release_name> <chart_folder_location>. In this case: helm install argilla-dev examples/deployments/k8s/argilla-chart
    This should install and start Elasticsearch and argilla-server on the k8s cluster.
  • kubectl port-forward service/argilla-dev-argilla-chart 6900:6900
  • http://localhost:6900 -> should open the Argilla UI login screen.
  • Logging in with USERNAME: argilla and PASSWORD: 12345678 authenticates the user and redirects to http://localhost:6900/datasets.
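
For reference, the manual steps above can be collected into a single command sequence. This is only a sketch: it assumes minikube and Helm are already installed and uses the chart path given in this PR.

    # Deploy the chart on a local minikube cluster and expose the Argilla UI.
    minikube start
    helm install argilla-dev examples/deployments/k8s/argilla-chart
    kubectl port-forward service/argilla-dev-argilla-chart 6900:6900
    # Then open http://localhost:6900 and log in with argilla / 12345678.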

Checklist

  • I added relevant documentation [TODO]
  • I followed the style guidelines of this project
  • I did a self-review of my code
  • I made corresponding changes to the documentation [TODO]
  • I confirm my changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • I have added relevant notes to the CHANGELOG.md file (see https://keepachangelog.com/) [TODO]

bikash119 (Contributor, Author) commented on Sep 18, 2024:

@davidberenstein1957 : Should we remove the k8s resource files inside the examples/deployments/k8s folder once we have Helm charts for Elasticsearch and Argilla? We will have a hard time keeping the k8s resource YAMLs and chart templates in sync if we decide to keep them both. Please share your feedback.
cc: @gabrielmbmb

davidberenstein1957 (Member) commented on Sep 20, 2024:

@frascuchon

- Removed the k8s resource YAMLs.
- The "built-in" Elasticsearch is deployed via the Elastic operator.
- Redis, Elasticsearch, and Argilla use PVs to persist data beyond restarts.
- The Argilla owner username and password are passed from values.yaml.
- HPA config is aligned with "conventions", i.e. it is enabled via a flag, and a static replicaCount variable is used when it is not activated.
- Some configuration can be injected via ConfigMaps/Secrets etc. If an external ES is used, the /etc/hosts entries are pushed using a ConfigMap.
- The ingress class is made configurable and can be set in values.yaml (an illustrative values.yaml sketch follows this list).
- TODO: Add a readme.md for developers to use it.
- TODO: Add unit tests to check persistence storage usage.
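
For illustration only, the configuration surface described above might look roughly like the following values.yaml fragment. The key names are hypothetical assumptions, not the chart's actual schema.

    # Hypothetical values.yaml sketch (key names are assumptions, not the chart's real schema).
    argilla:
      auth:
        username: argilla          # default owner user
        password: "12345678"       # default owner password
      persistence:
        enabled: true              # persist data on a PV across restarts
      hpa:
        enabled: false             # when false, the static replicaCount below applies
      replicaCount: 1
    elasticsearch:
      enabled: true                # deploy the "built-in" ES via the Elastic operator
      # For an external ES, extra /etc/hosts entries can be injected through a ConfigMap.
    redis:
      enabled: true                # dependency chart
    ingress:
      className: nginx             # configurable ingress class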
bikash119 (Contributor, Author) commented on Sep 21, 2024:

Hi @davidberenstein1957, @frascuchon. Please share your feedback on this work while I continue creating the README for this change. I have been able to resolve all the review comments from the original PR 4065.
In a nutshell:

  • Argilla can be installed using a Helm chart.
  • Redis is added as a dependency chart.
  • ES is installed using the Elastic Operator.
  • ES installation is optional, based on a value in values.yaml.
  • All of the services use PVs to persist data.
  • The default owner user can be set from values.yaml.
  • Modified .pre-commit-config.yml to run helm lint on the k8s resource YAMLs (a sketch of such a hook follows this list).
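
A hook of that shape could be declared as a local pre-commit hook. The following is a sketch with assumed hook id and paths, not necessarily the exact entry added in this PR.

    # Hypothetical .pre-commit-config.yml entry (id and paths are assumptions).
    repos:
      - repo: local
        hooks:
          - id: helm-lint
            name: helm lint
            entry: helm lint examples/deployments/k8s/argilla-chart
            language: system
            files: ^examples/deployments/k8s/argilla-chart/
            pass_filenames: false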

bikash119 marked this pull request as ready for review on September 21, 2024 03:58
frascuchon (Member) left a review comment:

Thanks @bikash119 for this amazing PR.

I left some comments/questions. I will try the installation later.

Review comments (outdated, resolved) were left on:
  • examples/deployments/k8s/argilla-chart/README.md
  • argilla-server/CHANGELOG.md
  • examples/deployments/k8s/argilla-chart/templates/NOTES.txt
  • examples/deployments/k8s/argilla-chart/values.yaml
bikash119 and others added 10 commits September 25, 2024 03:21
Accept change suggested by @frascuchon

Co-authored-by: Paco Aranda <[email protected]>
Accept change suggested by @frascuchon

Co-authored-by: Paco Aranda <[email protected]>
Accept change suggested by @frascuchon

Co-authored-by: Paco Aranda <[email protected]>
Accept change suggested by @frascuchon

Co-authored-by: Paco Aranda <[email protected]>
Accept change suggested by @frascuchon

Co-authored-by: Paco Aranda <[email protected]>
- Added steps to get started with a minikube cluster for local development
- Updated README.md to include a few missing steps
jfcalvo (Member) commented on Sep 26, 2024:

@bikash119 in addition to adding Redis as a dependency, we need a change to run the Argilla server workers that process the background jobs enqueued into Redis.

You can take a look at https://github.com/argilla-io/argilla/blob/develop/examples/deployments/docker/docker-compose.yaml#L36-L47 to check how we run the workers in the Argilla Docker Compose example, but the idea is that the instance running the workers should have access to the same config the server uses and then execute the following:

python -m argilla_server worker

Thanks for the PR btw ❤️
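
For context, a worker resource along those lines could be a second Deployment that runs the worker command and reuses the server's environment. This is an illustrative sketch with assumed resource, image, and ConfigMap/Secret names, not the template added in this PR.

    # Illustrative worker Deployment (names and image tag are assumptions).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: argilla-worker
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: argilla-worker
      template:
        metadata:
          labels:
            app: argilla-worker
        spec:
          containers:
            - name: argilla-worker
              image: argilla/argilla-server:latest
              command: ["python", "-m", "argilla_server", "worker"]
              envFrom:
                - configMapRef:
                    name: argilla-config    # same config the server uses (assumed name)
                - secretRef:
                    name: argilla-secrets   # same credentials the server uses (assumed name)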

- added a resource yaml for argilla-worker
- fixed the ports used for readiness and liveness probes
- Added labels and selector-labels for worker
…e successfully before running integration test
bikash119 (Contributor, Author) commented:

@jfcalvo Thank you for the pointer. I have included it as a separate resource in the Helm chart.
Additionally, I have updated the README.md with instructions to test the deployment on a clean minikube cluster. I was able to execute the integration tests using pytest tests/integration (a rough sketch of that flow follows below).
cc: @davidberenstein1957 , @frascuchon
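
A rough outline of that verification flow, for illustration: the release and service names are taken from the testing steps above, and whether the integration tests pick up the server location from ARGILLA_API_URL is an assumption about the test setup, not something stated in this PR.

    # Hypothetical end-to-end check on a clean minikube cluster.
    helm dependency update examples/deployments/k8s/argilla-chart   # pulls the Redis dependency chart
    helm install argilla-dev examples/deployments/k8s/argilla-chart
    kubectl port-forward service/argilla-dev-argilla-chart 6900:6900 &
    ARGILLA_API_URL=http://localhost:6900 pytest tests/integration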

frascuchon self-requested a review on September 30, 2024 09:03
frascuchon (Member) left a review comment:

Thanks for the contribution @bikash119!!!

frascuchon added this to the v2.3.0 milestone on Sep 30, 2024
frascuchon merged commit 022b414 into argilla-io:develop on Oct 1, 2024
1 of 2 checks passed

codecov bot commented Oct 1, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.21%. Comparing base (12f5109) to head (e41d372).
Report is 1 commit behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #5512      +/-   ##
===========================================
- Coverage    91.23%   91.21%   -0.02%     
===========================================
  Files          145      145              
  Lines         5964     5964              
===========================================
- Hits          5441     5440       -1     
- Misses         523      524       +1     
Flag argilla-server: coverage 91.21% <ø> (-0.02% ⬇️)

Flags with carried forward coverage won't be shown.

