BenchWeb/benchmarks add #1
base: BenchWeb/frameworks
Conversation
Reviewer's Guide by Sourcery

This PR adds a comprehensive benchmarking test framework implementation that includes test types for JSON, Plaintext, DB, Query, Cached Query, Update, and Fortune tests. The framework provides verification utilities, test runners, and load testing capabilities using wrk.

Class diagram for the benchmarking framework:

```mermaid
classDiagram
    class Benchmarker {
        +Benchmarker(config)
        +run()
        +stop(signal, frame)
    }
    class FrameworkTest {
        +FrameworkTest(name, directory, benchmarker, runTests, args)
        +start()
        +is_accepting_requests()
        +verify_urls()
    }
    class AbstractTestType {
        +AbstractTestType(config, name, requires_db, accept_header, args)
        +parse(test_keys)
        +request_headers_and_body(url)
        +output_headers_and_body()
        +verify(base_url)
        +get_url()
        +get_script_name()
        +get_script_variables(name, url, port)
    }
    class TestType {
        +TestType(config)
        +get_url()
        +verify(base_url)
        +get_script_name()
        +get_script_variables(name, url)
    }
    AbstractTestType <|-- TestType
    Benchmarker --> FrameworkTest
    FrameworkTest --> AbstractTestType
    TestType <|-- JSONTestType
    TestType <|-- PlaintextTestType
    TestType <|-- DBTestType
    TestType <|-- QueryTestType
    TestType <|-- CachedQueryTestType
    TestType <|-- UpdateTestType
    TestType <|-- FortuneTestType
```
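The hierarchy in the diagram can be sketched in Python. Class and method names follow the diagram, but the bodies below are illustrative assumptions, not the PR's actual implementation:

```python
# Minimal sketch of the test-type hierarchy from the class diagram.
# Bodies are illustrative; the real framework adds verification,
# scripting, and database handling on top of this shape.

class AbstractTestType:
    def __init__(self, config, name, requires_db=False, accept_header=None, args=None):
        self.config = config
        self.name = name
        self.requires_db = requires_db
        self.accept_header = accept_header or "*/*"
        self.args = args or []

    def get_url(self):
        raise NotImplementedError

    def verify(self, base_url):
        raise NotImplementedError


class JSONTestType(AbstractTestType):
    def __init__(self, config):
        super().__init__(config, "json", accept_header="application/json")
        self.json_url = ""

    def parse(self, test_keys):
        # Pull this test type's URL out of the benchmark_config entries.
        self.json_url = test_keys.get("json_url", "")
        return self

    def get_url(self):
        return self.json_url


test = JSONTestType(config={}).parse({"json_url": "/json"})
print(test.get_url())  # -> /json
```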
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:
Hey @gitworkflows - I've reviewed your changes and they look great!
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟡 Security: 1 issue found
- 🟢 Testing: all looks good
- 🟡 Complexity: 1 issue found
- 🟡 Documentation: 3 issues found
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Context:

```python
if all(v.lower() not in headers
       for v in ('Content-Length', 'Transfer-Encoding')):
    problems.append(('fail', 'Required response header missing: %s' % v,
                     url))
```

🚨 suggestion (security): Validate Transfer-Encoding value when present

When a Transfer-Encoding header is present, verify that its value is 'chunked'. Invalid Transfer-Encoding values could cause issues with response handling. For example:

```python
if all(v.lower() not in headers for v in ('Content-Length', 'Transfer-Encoding')):
    problems.append(('fail', 'Required response size header missing', url))
elif 'transfer-encoding' in headers and headers['transfer-encoding'].lower() != 'chunked':
    problems.append(('fail', 'Invalid Transfer-Encoding header value', url))
```
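The suggested check can be exercised in isolation. The helper name and the plain-dict headers below are assumptions for illustration; the framework's own verification code works on its response objects:

```python
# Standalone sketch of the suggested Content-Length / Transfer-Encoding check.
# check_size_headers and the dict-of-headers input are assumed names, not the
# framework's API.
def check_size_headers(headers, url):
    problems = []
    # Normalize header names so the check is case-insensitive.
    lowered = {k.lower(): v for k, v in headers.items()}
    if 'content-length' not in lowered and 'transfer-encoding' not in lowered:
        problems.append(('fail', 'Required response size header missing', url))
    elif lowered.get('transfer-encoding', 'chunked').lower() != 'chunked':
        problems.append(('fail', 'Invalid Transfer-Encoding header value', url))
    return problems


print(check_size_headers({'Content-Length': '42'}, '/json'))       # -> []
print(check_size_headers({}, '/json'))                             # missing-header failure
print(check_size_headers({'Transfer-Encoding': 'gzip'}, '/json'))  # invalid-value failure
```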
Context (from the README template):

> 6. Fix this `README.md` and open a pull request
>
> Starting on line 49 is your actual `README.md` that will sit with your test implementation. Update all the dummy values to their correct values so that when people visit your test in our Github repository, they will be greated with information on how your test implementation works and where to look for useful source code.

issue (documentation): Fix typo: 'greated' should be 'greeted'
Context (from the README template):

> # $DISPLAY_NAME Benchmarking Test
>
> ### Test Type Implementation Source Code
>
> * [JSON](Relative/Path/To/Your/Source/File)
> * [PLAINTEXT](Relative/Path/To/Your/Source/File)
> * [DB](Relative/Path/To/Your/Source/File)
> * [QUERY](Relative/Path/To/Your/Source/File)
> * [CACHED QUERY](Relative/Path/To/Your/Source/File)
> * [UPDATE](Relative/Path/To/Your/Source/File)

suggestion (documentation): Consider marking placeholder values more explicitly

Values like $DISPLAY_NAME and the relative paths could be marked with `<PLACEHOLDER>` syntax or similar to make it clearer they need to be replaced:

```markdown
# <DISPLAY_NAME> Benchmarking Test

### Test Type Implementation Source Code

* [JSON](<PATH_TO_SOURCE_FILE>)
* [PLAINTEXT](<PATH_TO_SOURCE_FILE>)
* [DB](<PATH_TO_SOURCE_FILE>)
* [QUERY](<PATH_TO_SOURCE_FILE>)
* [CACHED QUERY](<PATH_TO_SOURCE_FILE>)
* [UPDATE](<PATH_TO_SOURCE_FILE>)
* [FORTUNES](<PATH_TO_SOURCE_FILE>)
```
Context (from the README template):

> 2. Edit `benchmark_config.json`
>
> You will need alter `benchmark_config.json` to have the appropriate end-points and port specified.

suggestion (documentation): Consider adding an example of benchmark_config.json modifications

A small example showing the required changes would make it clearer what needs to be modified. Suggested wording:

> You will need to alter `benchmark_config.json` to have the appropriate end-points and port specified. For example:

```json
{
  "endpoint": "http://localhost:8080",
  "port": 8080,
  "routes": ["/api/v1/endpoint"]
}
```
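To sanity-check the reviewer's example config shape, it can be parsed and validated with the standard library. Note the field names here are the reviewer's illustrative example, not necessarily the PR's actual `benchmark_config.json` schema:

```python
import json

# Hedged sketch: parse the example benchmark_config.json shape from the
# review comment and check the fields it mentions. The real schema used
# by the framework may differ.
example = '''
{
  "endpoint": "http://localhost:8080",
  "port": 8080,
  "routes": ["/api/v1/endpoint"]
}
'''

config = json.loads(example)
assert isinstance(config["port"], int), "port should be numeric"
assert config["endpoint"].startswith("http"), "endpoint should be a URL"
print(config["routes"][0])  # -> /api/v1/endpoint
```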
Context:

```python
    return problems

def verify_query_cases(self, cases, url, check_updates=False):
```

issue (complexity): Consider extracting query parameter validation logic into a separate function to improve code organization.

The `verify_query_cases()` function has become overly complex, with deep nesting and mixed responsibilities. Consider extracting the query parameter validation:

```python
def validate_query_param(q, max_infraction):
    try:
        queries = int(q)
        if queries > MAX:
            return MAX, None
        elif queries < MIN:
            return MIN, None
        return queries, None
    except ValueError:
        return 1, (max_infraction,
                   'No response given for stringy `queries` parameter %s\n'
                   'Suggestion: modify your /queries route to handle this case' % q)


def verify_query_cases(self, cases, url, check_updates=False):
    problems = []
    world_db_before = {}
    if check_updates:
        world_db_before = databases[self.database.lower()].get_current_world_table(self.config)
    for q, max_infraction in cases:
        case_url = url + q
        headers, body = self.request_headers_and_body(case_url)
        expected_len, validation_error = validate_query_param(q, max_infraction)
        if validation_error:
            problems.append(validation_error)
            continue
        problems.extend(verify_randomnumber_list(expected_len, headers, body, case_url, max_infraction))
        problems.extend(verify_headers(self.request_headers_and_body, headers, case_url))
        if check_updates and int(q) >= MAX:
            world_db_after = databases[self.database.lower()].get_current_world_table(self.config)
            problems.extend(verify_updates(world_db_before, world_db_after, MAX, case_url))
    return problems
```

This reduces nesting and separates concerns while maintaining functionality. The validation logic is now reusable and the main function flow is clearer.
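The extracted helper is easy to exercise on its own. The clamping bounds below are assumed values for illustration; the framework defines its own `MIN`/`MAX` constants:

```python
# Standalone check of the extracted validate_query_param helper from the
# refactoring suggestion above. MIN/MAX are assumed bounds, not the
# framework's actual constants.
MIN, MAX = 1, 500

def validate_query_param(q, max_infraction):
    try:
        queries = int(q)
        if queries > MAX:
            return MAX, None   # clamp oversized query counts
        elif queries < MIN:
            return MIN, None   # clamp undersized query counts
        return queries, None
    except ValueError:
        # Non-numeric values default to 1 and report a problem tuple.
        return 1, (max_infraction,
                   'No response given for stringy `queries` parameter %s' % q)


print(validate_query_param("20", "fail"))   # -> (20, None)
print(validate_query_param("999", "fail"))  # -> (500, None)
print(validate_query_param("0", "fail"))    # -> (1, None)
```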
suggestion (code-quality): Replace interpolated string formatting with f-string (`replace-interpolation-with-fstring`)

```python
# Before
url = "http://%s:%s%s" % (self.benchmarker.config.server_host,
                          self.port,
                          self.runTests[test_type].get_url())

# After
url = f"http://{self.benchmarker.config.server_host}:{self.port}{self.runTests[test_type].get_url()}"
```
Context:

```python
with open(os.path.join(verificationPath, 'verification.txt'),
          'w') as verification:
    test = self.runTests[test_type]
    log("VERIFYING %s" % test_type.upper(),
```

issue (code-quality): We've found these issues:

- Replace interpolated string formatting with f-string [×5] (`replace-interpolation-with-fstring`)
- Invert any/all to simplify comparisons (`invert-any-all`)
- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
Context:

```python
# json_url should be at least "/json"
if len(self.json_url) < 5:
    problems.append(
```

issue (code-quality): We've found these issues:

- Replace call to format with f-string (`use-fstring-for-formatting`)
- Replace if statement with if expression (`assign-if-exp`)
suggestion (code-quality): Replace call to format with f-string (`use-fstring-for-formatting`)

```python
# Before
'max_concurrency':
    max(self.config.concurrency_levels),
'name':
    name,
'duration':
    self.config.duration,
'levels':
    " ".join(
        "{}".format(item) for item in self.config.concurrency_levels),
'server_host':
    self.config.server_host,
'url':
    url,
'accept':
    "application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7"

# After
'max_concurrency': max(self.config.concurrency_levels),
'name': name,
'duration': self.config.duration,
'levels': " ".join(
    f"{item}" for item in self.config.concurrency_levels
),
'server_host': self.config.server_host,
'url': url,
'accept': "application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7",
```
Context:

```python
# plaintext_url should be at least "/plaintext"
if len(self.plaintext_url) < 10:
    problems.append(
```

issue (code-quality): We've found these issues:

- Replace call to format with f-string (`use-fstring-for-formatting`)
- Replace if statement with if expression (`assign-if-exp`)
PR Code Suggestions ✨

Explore these optional code suggestions:
Bumps the go_modules group with 1 update in the /frameworks/Go/goravel/src/fiber directory: [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt).

Updates `github.com/golang-jwt/jwt/v4` from 4.5.0 to 4.5.1
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](golang-jwt/jwt@v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps the maven group with 1 update in the /frameworks/Java/ninja-standalone directory: [org.hibernate.validator:hibernate-validator](https://github.com/hibernate/hibernate-validator). Bumps the maven group with 1 update in the /frameworks/Java/restexpress directory: [com.fasterxml.jackson.core:jackson-databind](https://github.com/FasterXML/jackson).

Updates `org.hibernate.validator:hibernate-validator` from 6.0.20.Final to 6.2.0.Final
- [Changelog](https://github.com/hibernate/hibernate-validator/blob/6.2.0.Final/changelog.txt)
- [Commits](hibernate/hibernate-validator@6.0.20.Final...6.2.0.Final)

Updates `com.fasterxml.jackson.core:jackson-databind` from 2.12.6.1 to 2.12.7.1
- [Commits](https://github.com/FasterXML/jackson/commits)

---
updated-dependencies:
- dependency-name: org.hibernate.validator:hibernate-validator
  dependency-type: direct:production
  dependency-group: maven
- dependency-name: com.fasterxml.jackson.core:jackson-databind
  dependency-type: direct:production
  dependency-group: maven
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: KhulnaSoft bot <[email protected]>
PR Type

enhancement, documentation

Description

- Adds a benchmarking test framework with JSON, Plaintext, DB, Query, Cached Query, Update, and Fortune test types.
- Adds load testing via `wrk`, with concurrency, pipeline, and query configurations.

Changes walkthrough 📝
**2 files**

| File | Summary |
|---|---|
| `benchmarks/load-testing/wrk/wrk.dockerfile` | Add Dockerfile for benchmarking setup with `wrk`. |
| `benchmarks/pre-benchmarks/benchmark_config.json` | Add benchmark configuration template. |

**17 files**

| File | Summary |
|---|---|
| `benchmarks/__init__.py` | Initialize and load benchmark test types dynamically, ignoring `__pycache__` folders. |
| `benchmarks/abstract_test_type.py` | Define abstract base class for benchmark test types. |
| `benchmarks/benchmarker.py` | Implement `Benchmarker` class to manage benchmark test execution. |
| `benchmarks/cached-query/cached-query.py` | Implement `TestType` for cached-query benchmarks. |
| `benchmarks/db/db.py` | Implement `TestType` for database benchmarks. |
| `benchmarks/fortune/fortune.py` | Implement `TestType` for fortune benchmarks, using `FortuneHTMLParser`. |
| `benchmarks/fortune/fortune_html_parser.py` | Create `FortuneHTMLParser` for parsing and validating HTML. |
| `benchmarks/framework_test.py` | Define `FrameworkTest` class for managing individual tests. |
| `benchmarks/json/json.py` | Implement `TestType` for JSON benchmarks. |
| `benchmarks/plaintext/plaintext.py` | Implement `TestType` for plaintext benchmarks. |
| `benchmarks/query/query.py` | Implement `TestType` for query benchmarks. |
| `benchmarks/update/update.py` | Implement `TestType` for update benchmarks. |
| `benchmarks/verifications.py` | Add verification functions for benchmark responses. |
| `benchmarks/load-testing/wrk/concurrency.sh` | Add concurrency benchmark script using `wrk`. |
| `benchmarks/load-testing/wrk/pipeline.sh` | Add pipeline benchmark script using `wrk`. |
| `benchmarks/load-testing/wrk/query.sh` | Add query benchmark script using `wrk`. |
| `benchmarks/load-testing/wrk/pipeline.lua` | Add Lua script for pipeline request handling in `wrk`. |

**1 file**

| File | Summary |
|---|---|
| `benchmarks/pre-benchmarks/README.md` | Add README template for new benchmark tests. |
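As a rough illustration of what a concurrency sweep script like `concurrency.sh` typically does, the sketch below builds one `wrk` invocation per concurrency level. The flag values, URL, and helper name are assumptions for illustration, not the PR's actual script:

```python
import shlex
import subprocess

# Hedged sketch of a wrk concurrency sweep. With dry_run=True it only
# builds the command strings; set dry_run=False to actually execute
# (requires wrk on PATH and a server listening at `url`).
def concurrency_sweep(url, levels, duration=15, threads=8, dry_run=True):
    commands = []
    for concurrency in levels:
        cmd = f"wrk -t {threads} -c {concurrency} -d {duration}s {url}"
        commands.append(cmd)
        if not dry_run:
            subprocess.run(shlex.split(cmd), check=True)
    return commands


for cmd in concurrency_sweep("http://localhost:8080/json", [16, 32, 64]):
    print(cmd)
```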