Mongo error because node found twice #85

Open
DatzAtWork opened this issue Jul 4, 2018 · 4 comments

Comments


jira-mod-0 is found twice, once as jira-mongo-0:27017 and once as jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 . This causes a mongo error.
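To illustrate why mongod rejects this reconfig: the member list contains both the bare hostname and the FQDN of the same pod, and MongoDB refuses a configuration where two host entries map to one node. A minimal sketch of a workaround (this is not the sidecar's actual code; `dedupeMembers` is a hypothetical helper) would drop a bare-hostname entry whenever an FQDN entry for the same pod and port is also present:

```javascript
// Hypothetical helper: drop a bare-hostname member when an FQDN member
// for the same pod (same first DNS label and same port) is present.
// This mirrors the duplicate pair from the logs below:
// 'jira-mongo-0:27017' vs 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017'.
function dedupeMembers(hosts) {
  return hosts.filter(host => {
    const [name, port] = host.split(':');
    if (name.includes('.')) return true; // keep FQDN entries as-is
    // drop the bare name if some FQDN entry starts with "<name>." on the same port
    return !hosts.some(other =>
      other !== host &&
      other.endsWith(':' + port) &&
      other.startsWith(name + '.'));
  });
}

const members = [
  'jira-mongo-0:27017',
  'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017'
];
console.log(dedupeMembers(members)); // only the FQDN entry survives
```

Deduplicating before calling replSetReconfig would avoid the `NewReplicaSetConfigurationIncompatible` error shown below, at the cost of assuming the FQDN is the reachable form.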

Here the logs of the sidecar on jira-mongo-0:
In the beginning, when only one pod is found:

$ kubectl.exe logs jira-mongo-0 mongo-sidecar -f

: [email protected] start /opt/cvallance/mongo-k8s-sidecar
: forever src/index.js

warn:    --minUptime not set. Defaulting to: 1000ms
warn:    --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
Using mongo port: 27017
Starting up mongo-k8s-sidecar
The cluster domain 'kubernetes.local' was successfully verified.
Addresses to add:     [ 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017' ]
Addresses to remove:  []
replSetReconfig { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'jira-mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1,
       host: 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: -1,
     catchUpTakeoverDelayMillis: 30000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5b3c9ff15749b1964879d023 } }
Error in workloop { MongoError: The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530703849 },
  '$clusterTime':
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530703885 },
     signature: { hash: [Binary], keyId: 0 } } }

Later, when the second pod is found, the sidecar logs:

Addresses to add:     [ 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017',
  'jira-mongo-1.jira-mongo.fapoms-training.svc.kubernetes.local:27017' ]
Addresses to remove:  []
replSetReconfig { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'jira-mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1,
       host: 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017' },
     { _id: 2,
       host: 'jira-mongo-1.jira-mongo.fapoms-training.svc.kubernetes.local:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: -1,
     catchUpTakeoverDelayMillis: 30000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5b3c9ff15749b1964879d023 } }
Error in workloop { MongoError: The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530704647 },
  '$clusterTime':
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530704647 },
     signature: { hash: [Binary], keyId: 0 } } }
@DatzAtWork (author)

The same issue occurs when KUBERNETES_MONGO_SERVICE_NAME is not set:

Addresses to add:     [ '10.6.26.135:27017', '10.6.26.136:27017' ]
Addresses to remove:  []
replSetReconfig { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'jira-mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1, host: '10.6.26.135:27017' },
     { _id: 2, host: '10.6.26.136:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: -1,
     catchUpTakeoverDelayMillis: 30000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5b3c9ff15749b1964879d023 } }
Error in workloop { MongoError: The hosts jira-mongo-0:27017 and 10.6.26.135:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'The hosts jira-mongo-0:27017 and 10.6.26.135:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts jira-mongo-0:27017 and 10.6.26.135:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1531122214 },
  '$clusterTime':
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1531122214 },
     signature: { hash: [Binary], keyId: 0 } } }


neeraj9194 commented Oct 29, 2018

Same here. Is there any solution for this?
I think the problem is here:

//We need to hack in the fix where the host is set to the hostname which isn't reachable from other hosts

@neeraj9194

Turns out it was my mistake all along. I was using a PersistentVolumeClaim template and PersistentVolumes that were not deleted when the StatefulSet was replaced, so the sidecar was picking up replica set "rs0" from an earlier deployment with the wrong hostname. If you don't see initial setup logs like the ones below, it may be picking up an old configuration.

The cluster domain 'cluster.local' was successfully verified.
Pod has been elected for replica set initialization
initReplSet 10.16.0.109:27017
initial rsConfig is { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
......


Skeen commented Dec 1, 2018

I can reproduce the issue on a fresh minikube installation.
I had to add a clusterrolebinding to avoid permission issues in the sidecars:

kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default

Alternatively, #86 would fix this.
However, after adding this and following the instructions, I get 3 mongo pods with sidecars, and the sidecars all throw errors like:

        host: 'mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1, host: '172.17.0.12:27017' },
     { _id: 2, host: '172.17.0.13:27017' },
     { _id: 3, host: '172.17.0.14:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: 60000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5c02af9574c38afd25d6604f } }
Error in workloop { MongoError: The hosts mongo-0:27017 and 172.17.0.12:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:182:13)
    at addChunk (_stream_readable.js:287:12)
    at readableAddChunk (_stream_readable.js:268:11)
    at Socket.Readable.push (_stream_readable.js:223:10)
    at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:122:17)
  name: 'MongoError',
  message:
   'The hosts mongo-0:27017 and 172.17.0.12:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg:
   'The hosts mongo-0:27017 and 172.17.0.12:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible' } 

The issue seems to arise because mongo-0 does not identify itself by its IP when discovered via the Kubernetes client; this requirement is described in the README:

... make sure that:

    the names of the mongo nodes are their IPs
    the names of the mongo nodes are their stable network IDs (for more info see the link above)

In the configuration above, neither is the case.
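The README's two acceptable forms can be checked mechanically. A minimal sketch (the `classifyMemberHost` helper is hypothetical, not part of the sidecar) that flags the problematic bare hostnames seen in these logs:

```javascript
// Hypothetical check following the README guidance quoted above:
// member hosts should be IPs or stable network IDs (FQDNs).
// A bare hostname like 'mongo-0:27017' is flagged because it is
// generally not resolvable from the other pods.
function classifyMemberHost(host) {
  const name = host.split(':')[0];
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(name)) return 'ip';
  if (name.includes('.')) return 'stable-network-id';
  return 'bare-hostname'; // not reachable from peers
}

console.log(classifyMemberHost('mongo-0:27017'));     // 'bare-hostname'
console.log(classifyMemberHost('172.17.0.12:27017')); // 'ip'
```

Running this over the member list in the logs above would single out 'mongo-0:27017' as the entry that violates the README's requirement.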
