
Cluster settings collection fails when cluster.max_shards_per_node is set #509

Closed
akazs opened this issue Dec 7, 2021 · 2 comments · Fixed by #603
Comments

@akazs
Contributor

akazs commented Dec 7, 2021

Hi,

we are experiencing an issue on our 7.14 clusters: after setting cluster.max_shards_per_node manually, the collector fails to unmarshal the cluster settings response.

cluster_settings.go:160 msg="failed to fetch and decode cluster settings stats" err="json: cannot unmarshal object into Go struct field Cluster.defaults.cluster.max_shards_per_node of type string"

The version we are using is 1.2.1, but the code responsible for this part does not appear to have changed between 1.2.1 and 1.3.0.

After some investigation we found that

  • Without cluster.max_shards_per_node being set manually, the response JSON from /_cluster/settings?include_defaults would be like
    {
      "defaults": {
        "cluster": {
          "max_shards_per_node.frozen": "3000",
          "max_shards_per_node": "1000"
        }
      }
    }
  • With cluster.max_shards_per_node being set manually as a persistent setting, the response JSON would become
    {
      "persistent": {
        "cluster": {
          "max_shards_per_node": "2000"
        }
      },
      "defaults": {
        "cluster": {
          "max_shards_per_node": {
            "frozen": "3000"
          }
        }
      }
    }

This change in shape seems to be the reason for the error: once the setting is overridden, the remaining default (frozen) is nested under max_shards_per_node as an object rather than a string.
Any ideas?

@kbiernat

kbiernat commented Jan 3, 2022

+1
It would be great to be able to monitor open vs max shards.

@miklezzzz

Rectified the issue by putting _cluster/settings -d '{"persistent":{"cluster.max_shards_per_node.frozen": "3000"}}' (or whatever value you prefer; we don't use frozen nodes). After that, the strange setting "max_shards_per_node": {"frozen": "3000"} disappeared from the defaults section (it moved to the persistent section) and the merge library is able to do its work properly.
P.S. I haven't found a proper solution yet; it's quite strange to observe such weird behavior from the Elasticsearch API, though :)
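Spelled out as a full request, the workaround above would look roughly like the following sketch (the host, port, and value are assumptions; adjust to your cluster):

```shell
# Explicitly persist the frozen default so it leaves the "defaults"
# section and max_shards_per_node stops being a nested object there.
curl -X PUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node.frozen": "3000"}}'
```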
