Complete the new TA pipeline implementations #1033
base: main
Conversation
joseph-sentry commented on Jan 24, 2025:
- creates the new cache analytics task
- creates the new ta process flakes task
- TA finisher uses new TADriver interface and ta_utils
- TA finisher queues up new tasks
- generalize the way bigquery accepts query parameters
- change type of params arg in bigquery_service to sequence
- feat: add upload_id field to ta_testrun protobuf
- add flags_hash field to ta_testrun protobuf
- create new testid generation function
- add test_id to ta_testrun proto
- add flaky_failure to testrun protobuf
- handle flaky failures in ta_storage.bq
- create sql queries for reading from bq
- write tests for ta_storage.bq aggregate queries
- improve the TADriver interface (see the sketch after this list):
  - add some more methods to the TADriver
  - implement a base constructor
  - modify the write_testruns interface
  - implement all methods in BQ and PG
  - improve BQ and PG tests
  - modify use of the TADriver interface in the processor and finishers
- update django settings to include the new settings
- TODO: modify requirements to a suitable shared version
- create ta_utils to replace test_results in the future; the reason is that we want a slightly different implementation of the test results notifier for the new TA pipeline
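For orientation, here is a minimal sketch of the driver interface this list describes. It is an assumption pieced together from the description and the snippets below (the base constructor, write_testruns, and write_flakes), not the PR's actual code:

```python
from abc import ABC, abstractmethod


class TADriver(ABC):
    """Hypothetical sketch of the TADriver interface described above."""

    def __init__(self, repo_id: int):
        # The shared "base constructor": state common to the
        # BigQuery (BQDriver) and Postgres (PGDriver) implementations.
        self.repo_id = repo_id

    @abstractmethod
    def write_testruns(self, testruns: list) -> None:
        """Persist parsed test runs to the backing store."""

    @abstractmethod
    def write_flakes(self, uploads: list) -> None:
        """Persist flake-detection results for the given uploads."""
```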
❌ 24 Tests Failed:
View the top 3 failed tests by shortest run time
To view more test analytics, go to the Test Analytics Dashboard
Flake.count != (Flake.recent_passes_count + Flake.fail_count),
def get_postgres_test_data(
    db_session: Session, repo: Repository, commit_sha: str, commit_yaml: UserYaml
The repo and commit_yaml are only used to initialize the driver and for the should_do_flaky_detection check. Instead, you could pass the driver and that flag in from the outside; that way you only need a single function instead of two.
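A sketch of that refactor, using hypothetical names; the caller builds the driver from repo/commit_yaml and computes the flag once, so one function serves both storage backends:

```python
def get_test_data(
    db_session,                 # SQLAlchemy Session, as in the original signature
    driver,                     # TADriver, already initialized by the caller
    do_flaky_detection: bool,   # precomputed should_do_flaky_detection(...) result
    commit_sha: str,
):
    # Fetch the test data through the driver; only branch on the flag here.
    ...
```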
# get all uploads pending process flakes in the entire repo? why stop at a given commit :D
uploads_to_process = ReportSession.objects.filter(
    report__report_type=CommitReport.ReportType.TEST_RESULTS.value,
    report__commit__repository__repoid=repo_id,
Suggested change:
- report__commit__repository__repoid=repo_id,
+ report__commit__repository=repo_id,
Not sure if the ORM is smart enough to do this automatically, but this could potentially avoid a join on repository.
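One way to settle it is to print the SQL Django generates for each variant (a sketch, assuming a Django shell with this project's models imported):

```python
qs = ReportSession.objects.filter(
    report__report_type=CommitReport.ReportType.TEST_RESULTS.value,
    report__commit__repository=repo_id,
)
# Inspect the generated SQL for a JOIN on the repository table.
print(str(qs.query))
```

Django usually collapses a filter on a related object (or on its primary key) into a comparison on the FK column itself, so both spellings may already skip the join; the printed SQL confirms it either way.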
if settings.BIGQUERY_WRITE_ENABLED:
    bq = BQDriver(repo_id)
    bq.write_flakes([upload for upload in uploads_to_process])
Suggested change:
- bq.write_flakes([upload for upload in uploads_to_process])
+ bq.write_flakes(list(uploads_to_process))
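Both spellings evaluate the queryset exactly once and build the same list; list(...) simply drops the redundant comprehension.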