
Getting a negative number for expiration #11

Closed

thoop opened this issue Sep 3, 2015 · 18 comments

Comments


thoop commented Sep 3, 2015

I'm getting a negative number for the expiration, causing my Redis client to come back with this error:

[ [Error: ERR invalid expire time in SETEX] ]

Line 105 uses this:

ex * 1000 - Date.now()

where my ex is 1441268386 and Date.now() was 1441268386507, meaning the computed expire time was -507 ms, i.e. already in the past.

The problem here is that the error comes back wrapped in a weird double array, so the error check doesn't catch it, and my responses to limit.get(function(err, limit) { ... }); always show n remaining, so it effectively doesn't rate limit at all.

I assume other people would have run into this. I was using a duration of 1 second and I'm hitting redis pretty hard concurrently.

Is this line stripping off the milliseconds and just making it accurate to the second?

var ex = (Date.now() + duration) / 1000 | 0;
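
To make the failure mode concrete, here is a minimal, self-contained sketch using the numbers above (the two-step flow, computing the truncated expiry on one request and reusing it on a later one, is my reading of the library, not verbatim from it):

// The expiry gets truncated to whole seconds up front:
var duration = 1000;                     // 1 second window
var t0 = 1441268385700;                  // pretend Date.now() returned this
var ex = (t0 + duration) / 1000 | 0;     // 1441268386 -- up to 999 ms of window lost

// A later concurrent request reuses the stored ex and recomputes the TTL:
var t1 = 1441268386507;                  // pretend Date.now() is now this
var ttl = ex * 1000 - t1;                // -507 -> "ERR invalid expire time in SETEX"
console.log(ttl);
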
@noamshemesh (Collaborator)

Thanks @thoop.
The last line you mention does strip the millis, as you say.
Do you call the rate limiter with duration: 1000 or with duration: 1?


thoop commented Sep 3, 2015

duration: 1000 is the lowest I call it with, and it can be up to 60000. I'm testing some changes on a fork that keeps the milliseconds, which looks to be a little better for how hard I'm hitting it. I'll submit a pull request once I'm sure it's working well and stable, if you're interested. I'm not sure what you want to do about that error message being wrapped in two arrays, or the best way to even handle that error.
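
Roughly the idea (an illustrative sketch only, the fork may end up differing): keep the expiry at millisecond resolution and hand Redis a millisecond TTL via PSETEX, which takes milliseconds, instead of SETEX:

// Illustrative only: skip the "/ 1000 | 0" truncation entirely
var exMs = Date.now() + duration;
// ...and later set a millisecond TTL (PSETEX takes ms, unlike SETEX):
db.psetex(key, exMs - Date.now(), count, function (err) { /* ... */ });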

Thanks for the quick response!

@noamshemesh (Collaborator)

I tried to write a test that reproduces this issue, and it passed.

var limit = new Limiter({
  duration: 1000,
  max: 2,
  id: 'something',
  db: db
});

limit.get(function(err, res) {
  res.remaining.should.equal(2);
});

limit.get(function(err, res) {
  res.remaining.should.equal(1);
  done();
});

Any thoughts?


thoop commented Sep 3, 2015

Let me see if I can reproduce it. Are you actually using redis in your test or do you have it mocked out?

@noamshemesh (Collaborator)

Actual Redis. The tests are in the test directory; you can run them by executing mocha test/.


thoop commented Sep 8, 2015

Sorry I haven't gotten to this yet. I've been swamped lately. I did run those tests and everything looked good, so I want to look into this more once I have a chance!


thoop commented Sep 16, 2015

I modified the test and got it to fail reliably around the seconds/milliseconds edge case.

In the last test of test/index.js, change it to be this:

// Warm up and prepare the data.
while (((Date.now() + 1000) / 1000 | 0) * 1000 - Date.now() > 10) {}

limits[0].get(function(err, res) {
  if (err) {
    done(err);
  } else {
    res.remaining.should.equal(left--);

    // Simulate multiple concurrent requests.
    limits.forEach(function(limit) {
      limit.get(complete);
    });
  }
});

You can see the while loop I added. It busy-waits until the current time is within 10 ms of the next whole second, i.e. right on the cusp of the rounding issue. It looks like the rounding causes Redis to clean up the counts earlier than the millisecond-resolution expiry would. We're seeing issues with this when running on AWS with ElastiCache because we get millisecond response times.
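
To spell out what the warm-up condition computes (variable names here are mine, just for illustration):

// Expiry of a request made right now, truncated to seconds as in the library:
var expirySec = (Date.now() + 1000) / 1000 | 0;

// How much of the real 1-second window that truncation leaves: 1..1000 ms.
var slackMs = expirySec * 1000 - Date.now();

// The loop spins until slackMs <= 10, so the counts written by the test
// are expired by Redis almost immediately, exposing the rounding bug.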

Removing the rounding looks to fix that issue... but it breaks a few other tests, because the calls now give back milliseconds rather than seconds. Thoughts?

@noamshemesh (Collaborator)

Hi thoop,
I won't be near a computer until next month. I will take a look then.
Thanks though!


@noamshemesh (Collaborator)

Sorry for the delay.
I think I found an easy way to reproduce it; take a look at 499e3fc. Please check out that commit and tell me if it reproduces your bug or something else (the last test fails without the while addition).

I will add a deterministic test with fakeredis right after we merge #12.


thoop commented Oct 12, 2015

I'll look into this soon. I don't think just changing redis to fakeredis would make it fail for the same reason unless fakeredis handles expiration differently.

@noamshemesh (Collaborator)

Can you please check whether #13 solves your issue?


thoop commented Oct 21, 2015

Sure, thanks! I'll report back.

@noamshemesh (Collaborator)

Closing due to inactivity.


brandonros commented Sep 20, 2019

I am able to recreate this with the latest version from npm (3.3.1).

{ max: 4, duration: 1000 }

const Limiter = require('ratelimiter')

// Note: bucketDefinitions, cacheModule and logger are defined elsewhere in our app.
const rateLimiters = {}

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

const rateLimit = async (bucketName, clientId) => {
  const bucketDefinition = bucketDefinitions[bucketName]
  if (!bucketDefinition) {
    throw new Error(`Bucket definition not found: ${bucketName}`)
  }
  const key = `${bucketName}:${clientId}`
  if (!rateLimiters[key]) {
    rateLimiters[key] = new Limiter({
      db: cacheModule.getConnection(), // redis, not ioredis or mock-redis
      id: key,
      max: bucketDefinition.max,
      duration: bucketDefinition.duration
    })
  }
  const limit = rateLimiters[key]
  const delta = await new Promise((resolve, reject) => {
    limit.get((err, rate) => {
      if (err) {
        return reject(err)
      }
      if (rate.remaining) {
        return resolve(0)
      }
      let delta = (rate.reset * 1000) - Date.now() | 0
      // watch out for weird negative delta problem
      if (delta <= 0) {
        delta = Math.ceil(bucketDefinition.duration / bucketDefinition.max)
      }
      logger.log({
        level: 'debug',
        message: 'Rate limited',
        key,
        rate,
        delta
      })
      return resolve(delta)
    })
  })
  if (delta) {
    await delay(delta)
    return rateLimit(bucketName, clientId) // retry once the wait is over
  }
}

@noamshemesh (Collaborator)

Just to make sure I understand: the reset is set to the floor value of the second even though Date.now() is already bigger? For example, for epoch 1441268386507 the reset would already be 1441268386000 when it is set?
Also, is it reproducible with duration 2000? What about 1500?

@brandonros

{
  "level": "debug",
  "message": "Rate limited",
  "key": "foo:bar",
  "rate": {
    "remaining": 0,
    "reset": 1568990156,
    "resetMs": 1568990156474,
    "total": 20
  },
  "delta": -188,
  "file": "/home/brandon/foo/rateLimit.js",
  "line": "39",
  "timestamp": "2019-09-20T14:35:56.188Z"
}
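
(Reading that log: reset * 1000 = 1568990156000, while the timestamp puts Date.now() at about 1568990156188, so delta = 1568990156000 - 1568990156188 = -188, exactly the negative value logged. resetMs - Date.now() = 1568990156474 - 1568990156188 = 286 ms would have been positive.)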

Does that help you, or do you need me to log Date.now() too? I changed the bucket definition to { max: 20, duration: 5000 } and it still happens.

@noamshemesh (Collaborator)

Can you use the resetMs instead?
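
A minimal sketch of that suggestion applied to the delta computation above (my adaptation, untested):

// Use the millisecond-resolution resetMs instead of the truncated reset:
let delta = rate.resetMs - Date.now()
if (delta < 0) delta = 0 // guard anyway; clock skew is still possible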

@brandonros

https://github.com/tj/node-ratelimiter#example

[screenshot of the example code from the README, which derives the retry delay from the seconds-resolution reset field]

That code is from your example.

I switched to a different package, thanks!
