RAM utilization increases over time #187

Open · vvitad opened this issue Jan 31, 2024 · 3 comments

vvitad commented Jan 31, 2024

RAM utilization has been increasing over time

The exporter shows RAM utilization growing over time, up to 2 GB. On one Postgres instance it started at 50 MB and grew to 1 GB in 10 days; another instance went from 50 MB to 2 GB in 7 days; others show smaller growth, e.g. 30 MB to 100 MB in 10 days.
After restarting the service that runs the exporter, the memory is released, but then utilization starts to increase again.

Installation details

  • operating system: [CentOS 7]
  • query-exporter installation type: pip
  • pip packages:
Package                Version
---------------------- ------------
aiohttp                3.7.4.post0
argcomplete            3.1.2
async-timeout          3.0.1
attrs                  22.2.0
chardet                4.0.0
croniter               2.0.1
idna                   3.6
idna-ssl               1.1.0
importlib-metadata     4.8.3
jsonschema             3.2.0
multidict              5.2.0
outcome                1.1.0
pip                    21.3.1
prometheus-aioexporter 1.6.3
prometheus-client      0.17.1
psycopg2-binary        2.8.6
pyrsistent             0.18.0
python-dateutil        2.8.2
pytz                   2023.3.post1
PyYAML                 6.0.1
query-exporter         2.7.0
Represent              1.6.0.post0
setuptools             59.6.0
six                    1.16.0
SQLAlchemy             1.3.24
sqlalchemy-aio         0.16.0
toolrack               3.0.1
typing_extensions      4.1.1
wheel                  0.37.1
yarl                   1.7.2
zipp                   3.6.0
  • docker image: [no docker]
  • snap: [no snap]

To Reproduce

Such a large increase reproduces only on some instances, and the only difference between them is the number of metrics retrieved (which depends on the number of queries and tables in the database). I can't see how that alone would prevent the memory from being released.

  1. Config file content (redacted of secrets if needed)
databases:
  dbname:
    dsn: env:PG_DATABASE_DSN_dbname

metrics:
  pg_table_seq_scan:
    type: counter
    description: Number of sequential scans initiated on the table
    labels: [datname, schemaname, relname, parent_relname]
    ....

queries:
  table_stats:
    interval: 1h
    databases: [dbname]
    metrics:
      - pg_table_seq_scan
      ...
    sql: >
      select
          current_database() as datname,
      ... limit 200
  idx_stats:
    interval: 1h
    databases: [dbname]
    metrics:
      - pg_idx_scan
      ... 
    sql: >
      with q_locked_rels as (
          select relation from pg_locks where mode = 'AccessExclusiveLock'
      ... limit 200
  query_stats:
    interval: 1m
    databases: [dbname]
    metrics:
      - pg_statements_calls
      ...
    sql: >
      with q_data as (
      select
      ... limit 200
  2. Ran query-exporter with the following command line ...
/usr/local/query_exporter/bin/query-exporter /etc/query_exporter/config.yml --host 0.0.0.0 --port 9560

PG_DATABASE_DSN_dbname=postgresql://<exporter_user>:<password>@<host>.ru:<pg_port>/dbname?target_session_attrs=read-write&application_name=query_exporter

Right now I'm trying keep-connected: false, but the results will take at least a couple of days. I don't understand why the exporter keeps holding the memory instead of returning it after a query run.
There is also a thought that it could be Postgres-specific behaviour. I would be grateful if you could share your knowledge.
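
For reference, this is where I'm setting it, assuming the database-level keep-connected option (dbname and the DSN environment variable are from the config above):

databases:
  dbname:
    dsn: env:PG_DATABASE_DSN_dbname
    # close the connection after each run instead of keeping it open between query intervals
    keep-connected: false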

You can clearly see when the exporter was restarted.
[Screenshot taken 2024-01-31 08:56:31: RAM utilization graph]

vvitad added the bug label on Jan 31, 2024

vvitad commented Mar 5, 2024

Setting expiration: 1m also isn't helping.
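
For reference, I set it per metric, assuming that's where the expiration option belongs (pg_table_seq_scan is one of the metrics from my config above):

metrics:
  pg_table_seq_scan:
    type: counter
    description: Number of sequential scans initiated on the table
    labels: [datname, schemaname, relname, parent_relname]
    # drop metric series that haven't been updated within this interval
    expiration: 1m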


GSX1400 commented May 15, 2024

I am also experiencing this bug.

OS: Amazon Linux 2
Docker image: 2.10.0

I have attempted it with the following:

OS: Ubuntu 24.04
Snap: 2.10.0

I have also tested version 2.9.0 with the same issue.

Same configuration in both cases, pointing at a Postgres RDS server.

If query-exporter is left running, it will eventually consume all system memory, leading to a freeze.
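
Not a fix, but as a stopgap I'm capping the container's memory so the host doesn't freeze. A rough docker-compose sketch of that setup (the image name, config mount path, port, and the 512m value reflect my environment and are only illustrative):

services:
  query-exporter:
    image: adonato/query-exporter:2.10.0
    # config mount path is whatever your image expects; mine is mounted read-only
    volumes:
      - ./config.yaml:/config.yaml:ro
    ports:
      - "9560:9560"
    # hard memory cap: the container gets OOM-killed and restarted
    # instead of exhausting host memory
    mem_limit: 512m
    restart: unless-stopped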


vvitad commented May 15, 2024

@GSX1400 we couldn't resolve this problem and switched to another exporter, burningalchemist/sql_exporter.
