
Observe and Balance #13

Open

mcdonnelldean opened this issue Apr 10, 2016 · 5 comments

@mcdonnelldean (Contributor)

@rjrodger wanted to get your opinion on this.

(The context is NodeZoo)

As you know, if we scale github / travis / npm we are going to get multiple calls back to info. This is accounted for (info will simply upsert multiple times) and is further mitigated by the cached timestamp we added: if an incoming timestamp matches or is older than the cached one, we drop the data.
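A minimal sketch of that timestamp guard, purely for illustration (shouldUpsert and cached_when are hypothetical names, not the actual NodeZoo code):

function shouldUpsert (cached, incoming) {
  // No cached record yet, so accept the first write
  if (!cached) return true

  // Hypothetical guard: drop the update unless it is strictly newer
  // than the data we already hold
  return incoming.cached_when > cached.cached_when
}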

This still leaves multiple messages over the network, and my ratio of messages goes out of whack, becoming 6 to 1 when each of the above is scaled once.

On the one hand this is fine: it's redundant, so if one service fails the other will kick in. But at the same time it adds pressure on info to handle calls it doesn't really want or need.

I think a nice solution to this would be balance-and-observe: all the benefits of observe, with the bonus that it balances when the services are the same.

The hard part is deciding what makes them the same. The simplest solution, I feel, is seneca.tag: since scaled services share the same tag, we could use it as a way to balance without losing observe.

You don't want this functionality all the time; for instance, I want ALL scaled copies of info to get all messages (role:info,res:part). From my perspective this would round out the balance client / mesh stuff nicely.

Thoughts?

CC @AdrianRossouw and @mcollina since ye both have skin in the game. Please bear in mind it all works as is; this is more of an optimisation. The current functionality does not break scaled services, it just adds load in places, and that additional load acts as redundancy.

var envs = process.env
var opts = {
  seneca: {
    // scaled copies of this service share the same tag
    tag: envs.GITHUB_TAG || 'nodezoo-github'
  },
  mesh: {
    auto: true,
    listen: [
      {pin: 'role:github,cmd:get', model: 'consume'}, // round-robin across listeners
      {pin: 'role:info,req:part', model: 'observe'}   // broadcast to all listeners; observe using tags? new model?
    ]
  }
}

I'm not sure how this should look on the input side, but I think what makes sense here is a new model that observes across different services and round-robins within the same tag. It's basically a combination of both behaviours.
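To make the idea concrete, here is a rough sketch of how such a model could dispatch (illustration only; targets, tag, and send are hypothetical names, not the seneca-balance-client API):

var counters = {}

// Hypothetical 'observe + balance' dispatch: one message per distinct tag
// (observe), rotating among the targets that share a tag (balance)
function observeBalance (targets, msg, send) {
  var byTag = {}
  targets.forEach(function (t) {
    var group = byTag[t.tag] = byTag[t.tag] || []
    group.push(t)
  })

  Object.keys(byTag).forEach(function (tag) {
    var group = byTag[tag]
    counters[tag] = ((counters[tag] || 0) + 1) % group.length
    send(group[counters[tag]], msg)
  })
}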

@mcdonnelldean (Contributor, Author)

@dgonzalez I'd like to get your opinion on the above too: basically, scaling in a way that doesn't break observe.

@dgonzalez (Member) commented Apr 26, 2016

I think (I am still not fully proficient in Seneca and seneca-balance-client) that with Visigoth (the circuit breaker I am working on) we could write a node rater smart enough to avoid sending duplicated requests (i.e. not sending messages to duplicated targets).

// Hand every upstream whose circuit is not closed to the caller
function api_choose_all(callback) {
    _(me.upstreams$).forEach(function(upstream, index) {
        if (upstream.meta$.status != "CLOSED") {
            callback(upstream, index);
        }
    });
}

This is current code in Visigoth that I introduced just for the sake of the observer pattern in seneca-balance-client. We could have something like:

function api_choose_first(...) {
  ...
}

That would send requests until one of them replies successfully which, as far as I can understand from the code, is what observe does. That way we avoid the unnecessary messages being issued (as you said, redundancy FTW, but I can see problems if we want the upstream to be hit exactly once at most).
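Something along these lines, sketched against api_choose_all above (a sketch only; the done(err) callback convention is an assumption, not the real Visigoth API):

// Hypothetical: try eligible upstreams one at a time and stop at the
// first successful reply
function api_choose_first(callback) {
    var eligible = _(me.upstreams$).filter(function(upstream) {
        return upstream.meta$.status != "CLOSED";
    }).value();

    function tryNext(index) {
        if (index >= eligible.length) return; // every upstream failed

        callback(eligible[index], index, function done(err) {
            if (err) tryNext(index + 1); // fall through to the next one
        });
    }

    tryNext(0);
}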

Does it make sense?

I think that way we can scale a service safely without the noise of duplicated messages going out of control.

I probably should give you more context about Visigoth. Let me add something to the calendar for next week.

@wzrdtales
@dgonzalez How is your work going on Visigoth? I'd be quite interested to see what you've done :)

@dgonzalez (Member)

@wzrdtales it is done now... it will be released soon. Send me an email at [email protected] and I can give you more details and a pre-release. Thanks for asking though.

@wzrdtales

@dgonzalez Cool thing and thanks for the fast reply, just mailed you :)
