
[docs] Configuration file documentation to achieve performance benchmark #1552

Open
hmasum52 opened this issue Sep 8, 2024 · 2 comments

Comments


hmasum52 commented Sep 8, 2024

The Permify documentation says that Permify can achieve response times of up to 10 ms for access control checks while handling up to 1 million access requests per second.

What configuration is needed to achieve this?

I found a benchmark on the website here, but it doesn't show the config.yaml file used to achieve the result. Documentation of the configuration, or an example repository, would be very helpful for reaching the performance mentioned in the documentation.
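For context, what I'm hoping to see documented is something along these lines — a minimal config.yaml sketch based on my reading of the Permify configuration reference. The key names should be double-checked against the version in use, and all values here are placeholders, not the benchmark settings:

```yaml
# Sketch only: placeholder values, not the configuration behind the published benchmark.
server:
  http:
    enabled: true
    port: 3476
  grpc:
    port: 3478

logger:
  level: info

database:
  engine: postgres
  uri: postgres://user:password@host:5432/permify
  max_open_connections: 20

service:
  permission:
    concurrency_limit: 100      # parallelism for check resolution (placeholder)
    cache:
      number_of_counters: 10000 # cache sizing (placeholder)
      max_cost: 10MiB
```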


EgeAytin (Member) commented Sep 9, 2024

Hi @hmasum52, thanks for asking! We do offer that infrastructure in both our cloud and on-prem offerings, and we are constantly improving and working on it. If you'd like to learn more, we're always open for a call. For the OSS version, we can help you achieve better results if you could send your tech stack and infrastructure details in our Discord community or here as well.

tolgaOzen moved this to Q3 2024 – Jul-Sep in Public Roadmap on Sep 13, 2024
polarathene commented

> For the OSS version, we can help you achieve better results if you could send your tech stack and infrastructure details in our Discord community or here as well.

I think all that is being asked for is the information needed to reproduce the claim: the config and environment used to produce it. Understandably this will vary by deployment environment, but documenting enough context to reproduce and verify the claim is ideal.

Alternatively, something like a compose.yaml with the Docker version used, plus a service provider (AWS, GCP, Azure, DigitalOcean, Vultr, etc.) and some context on the deployment there, is probably sufficient. Even if that setup doesn't achieve the latency + load cited, the figures could at least be adjusted for that demonstration example?

  • Your cloud and on-prem offerings may be better tuned for achieving the metrics cited; that's fine. AFAIK all @hmasum52 is seeking is more context on what to expect with the OSS version.
  • Providing a minimal compose.yaml example that performs well at one or more service providers with different instance types would better illustrate performance and how it compares to the cloud/on-prem offerings (a rough sketch of what that could look like follows this list).
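
To illustrate, here is a rough sketch of such a compose.yaml. The image name, ports, and the `serve --config` flag are taken from the public Permify docs and should be verified; everything else (versions, credentials, sizing) is a placeholder, not the benchmark setup:

```yaml
# Sketch only: not the configuration used for the published benchmark.
services:
  permify:
    image: ghcr.io/permify/permify:latest   # pin an exact version for reproducibility
    command: serve --config /config/config.yaml
    volumes:
      - ./config.yaml:/config/config.yaml:ro
    ports:
      - "3476:3476"   # REST API
      - "3478:3478"   # gRPC API
    depends_on:
      - postgres

  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: permify
      POSTGRES_PASSWORD: permify
      POSTGRES_DB: permify
    volumes:
      - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:
```

Pairing something like that with the instance type(s) and the load-generation tool used would make the benchmark claim reproducible.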
