Axon node memory consumption #1289

Closed
distributedstatemachine opened this issue Aug 1, 2023 · 6 comments · Fixed by #1531
Labels: t:help (Extra attention is needed)

Comments

@distributedstatemachine (Collaborator)


Description

We run our Axon nodes on an EKS cluster in a StatefulSet with 4 nodes:

kubectl get statefulsets
NAME    READY   AGE
axon1   1/1     67d
axon2   1/1     67d
axon3   1/1     67d
axon4   1/1     67d

Unfortunately, axon1 seems to consume over twice the memory of the other pods:

kubectl top pods
NAME      CPU(cores)   MEMORY(bytes)   
axon1-0   13m          4719Mi          
axon2-0   14m          2041Mi          
axon3-0   12m          2700Mi          
axon4-0   13m          2224Mi  

All pods are running the same config and receiving roughly the same amount of network data from the ingress controller, so they are expected to have the same load.

I would appreciate help in understanding this discrepancy.
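
For reference, a rough way we could double-check that each pod really receives comparable traffic is to compare the cumulative receive counters inside the containers (a sketch only; it assumes the pods' primary network interface is eth0):

# Rough sketch: compare cumulative received bytes per pod.
# Assumes the container's primary interface is named eth0.
for pod in axon1-0 axon2-0 axon3-0 axon4-0; do
  echo -n "$pod: "
  kubectl -n axon exec "$pod" -- cat /sys/class/net/eth0/statistics/rx_bytes
done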

@distributedstatemachine added the t:help (Extra attention is needed) label Aug 1, 2023
github-actions bot commented Aug 1, 2023

Hello @driftluo. Please take a look at this question.

@driftluo (Contributor) commented Aug 1, 2023

Is there any function that node 1 performs alone? For example, is data stress testing run only against node1?

@distributedstatemachine (Collaborator, Author)

This isn't currently possible, as we route all traffic through the ingress. The ingress has a backend, axon-chain, which distributes traffic round-robin to the different backends:

kubectl describe ingress khalani-testnet
Name:             khalani-testnet
Labels:           <none>
Namespace:        axon
Address:          k8s-ingressn-ingressn-40f737c3d3-d15bcc70ab323dc6.elb.us-east-1.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  testnet.khalani.network  
                           /   axon-chain:8000 (10.4.137.166:8000,10.4.152.35:8000,10.4.80.127:8000 + 1 more...)
Annotations:               kubernetes.io/ingress.class: nginx
                           nginx.ingress.kubernetes.io/cors-allow-headers:
                             DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access...
                           nginx.ingress.kubernetes.io/cors-allow-methods: POST, GET, OPTIONS
                           nginx.ingress.kubernetes.io/cors-allow-origin: *
                           nginx.ingress.kubernetes.io/enable-cors: true
Events:                    <none>
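
For isolated testing, one option might be a temporary port-forward straight to a single pod, which bypasses the round-robin ingress (a rough sketch; port 8000 is taken from the backend definition above, and the eth_blockNumber call is only an illustrative JSON-RPC request):

# Forward a local port directly to axon1-0, bypassing the ingress.
kubectl -n axon port-forward pod/axon1-0 8000:8000

# In another shell, point test traffic (or a stress run) at that single node.
curl -s -X POST http://127.0.0.1:8000 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'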

@yangby-cryptape (Collaborator)

Hi, do you know how it got to be this way?

I mean:

  • Did the memory grow slowly, day after day, until it finally became this large?
  • Or did it suddenly jump to this size?

@distributedstatemachine (Collaborator, Author)

@yangby-cryptape I am unsure how it gets to this point, but it appears to be the node's "steady state".
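
A simple way to distinguish the two cases (and to confirm whether this really is a steady state) might be to sample kubectl top on a schedule and compare axon1-0 against the other pods over a few days (a rough sketch; the 5-minute interval and log file name are arbitrary choices):

# Rough sketch: record per-pod memory usage every 5 minutes.
while true; do
  date -u +"%Y-%m-%dT%H:%M:%SZ" >> axon-memory.log
  kubectl -n axon top pods >> axon-memory.log
  sleep 300
done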

@Flouse linked a pull request Nov 7, 2023 that will close this issue
@Flouse assigned Simon-Tl and unassigned driftluo Nov 7, 2023
@Flouse (Contributor) commented Nov 15, 2023

Memory Usage Test, see #1531

@Flouse closed this as completed Nov 15, 2023