
ArgoCD stuck in waiting for completion of hook batch/Job/argocd-redis-secret-init #2887

Open
oponomarov-tu opened this issue Aug 19, 2024 · 4 comments
Labels: argo-cd, bug (Something isn't working)

Comments

@oponomarov-tu

Describe the bug

Ref: #2861

I still face the same issue even on Helm chart versions >= v7.4.1. My ArgoCD instance is deployed on Fargate and manages itself after bootstrap. Tested on a clean EKS cluster (v1.30). Attaching my values.yaml for reference.

Details

controller:
  replicas: 1
  env:
    - name: ARGOCD_K8S_CLIENT_QPS
      value: "300"
    - name: ARGOCD_SYNC_WAVE_DELAY
      value: "30"
  resources:
    limits:
      memory: 3Gi
    requests:
      cpu: 1
      memory: 3Gi
repoServer:
  autoscaling:
    enabled: true
    minReplicas: 1
  resources:
    requests:
      cpu: "100m"
      memory: "1Gi"
    limits:
      memory: "1Gi"
applicationSet:
  replicaCount: 1
dex:
  enabled: false
server:
  autoscaling:
    enabled: true
    minReplicas: 1
  resources:
    requests:
      cpu: "100m"
      memory: "256Mi"
    limits:
      memory: "512Mi"
redis:
  enabled: false
redis-ha:
  enabled: true
  haproxy:
    enabled: true
    init:
      resources:
        requests:
          memory: 100Mi
          cpu: 10m
        limits:
          memory: 100Mi
    resources:
      requests:
        memory: 200Mi
        cpu: 15m
      limits:
        memory: 200Mi
  redis:
    resources:
      requests:
        memory: 500Mi
        cpu: 20m
      limits:
        memory: 1Gi
  sentinel:
    resources:
      requests:
        memory: 20Mi
        cpu: 10m
      limits:
        memory: 20Mi
  exporter:
    resources:
      requests:
        memory: 25Mi
        cpu: 5m
      limits:
        memory: 25Mi
  init:
    resources:
      requests:
        memory: 25Mi
        cpu: 25m
      limits:
        memory: 25Mi
configs:
  secret:
    githubSecret: "$github-argocd-webhook:webhook.github.secret"
  repositories:
    aws-public-ecr:
      name: aws-public-ecr
      type: helm
      url: public.ecr.aws
      enableOCI: "true"

Perhaps the ttlSecondsAfterFinished should be configurable?
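
To illustrate what I mean, here is a hand-written sketch of the init hook Job (not the chart's actual template; the TTL value, the hypothetical redisSecretInit.ttlSecondsAfterFinished value name, and the image are placeholders). The idea is to either expose the TTL as a chart value or rely on Argo CD's hook-delete-policy annotation so Argo CD cleans up the Job itself instead of racing the TTL controller, which (as I understand argoproj/argo-cd#6880) is what leaves the sync waiting forever:

apiVersion: batch/v1
kind: Job
metadata:
  name: argocd-redis-secret-init
  namespace: argocd
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded   # let Argo CD delete the finished hook
spec:
  ttlSecondsAfterFinished: 600   # hypothetical chart value, e.g. redisSecretInit.ttlSecondsAfterFinished
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: secret-init
          image: quay.io/argoproj/argocd   # placeholder image/tag
          command: ["argocd", "admin", "redis-initial-password"]   # illustrative only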

Related helm chart

argo-cd

Helm chart version

7.4.1

To Reproduce

  1. Install ArgoCD.
  2. Let ArgoCD manage itself (roughly as in the sketch below).
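
For completeness, the "manage itself" step in my setup is roughly the following Application; the names, target revision, and sync policy are just my choices, not something the chart mandates:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-cd
    targetRevision: 7.4.1
    helm:
      releaseName: argocd
      # the values.yaml shown above is supplied via helm values in Git
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true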

Expected behavior

Clean sync.

Screenshots

N/A

Additional context

N/A

oponomarov-tu added the bug label on Aug 19, 2024
@yu-croco
Collaborator

Is it related to argoproj/argo-cd#6880?

@oponomarov-tu
Author

@yu-croco

Is it related to argoproj/argo-cd#6880?

Yes, and it was allegedly fixed in the following pull request. I was advised to reopen the ticket.

@speedythesnail

speedythesnail commented Sep 1, 2024

I would like to confirm that I also have this issue, using Kind. I have not yet installed this in my OpenShift or vanilla cluster, but I was able to successfully install ArgoCD in OpenShift a month ago using the operator.

I am trying to install via the Helm chart, with minimal modifications to the values file. In the past I've had no issues with my present values.

(Screenshot attached: 2024-08-31 22-35-17)

@svianac

svianac commented Sep 13, 2024

Hello, I have the same situation in an Argo-of-Argos architecture. The managed Argo instances get stuck:
(screenshot attached)
It shows "Pending deletion". However, at the cluster level I can see that the Job gets created and finishes within a few seconds, yet Argo CD still shows it as Running.

I am using Helm chart version 7.4.4, which already includes the fix for argoproj/argo-cd#6880, but apparently it did not fix all cases.
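
For what it's worth, at the cluster level the Job's status looks completed (a sketch from memory; timestamps are approximate), even while the Argo CD UI keeps the hook in Running:

apiVersion: batch/v1
kind: Job
metadata:
  name: argocd-redis-secret-init
  namespace: argocd
status:
  startTime: "2024-09-13T09:00:00Z"        # illustrative timestamps
  completionTime: "2024-09-13T09:00:05Z"   # a few seconds after start
  succeeded: 1
  conditions:
    - type: Complete
      status: "True"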
