Expect Failure Mode for Tests #2982
@zemekeneng This is really interesting! Thanks for the detailed writeup. We've spent years building instincts around "no rows / 0 count means pass, rows / >0 count means fail" and "select/count all the records you don't want." While it's now an acquired taste, and one I've come to like, I don't think it's radically more intuitive than "pass and fail when I say so."
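To spell that paradigm out (a rough sketch, not dbt's exact compiled SQL): a schema test is just a query that selects the records that violate the assertion, and the count of returned rows decides pass/fail. For a simple uniqueness check, that looks something like:

```sql
-- Select the rows you do NOT want: values of not_unique_col that appear more than once.
-- 0 rows returned => the test passes; 1+ rows returned => the test fails.
select
    not_unique_col,
    count(*) as n_records
from my_model
group by not_unique_col
having count(*) > 1
```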
This is a very good point. By writing tests with an eye only toward them passing (a major achievement, to have data be exactly the way we hope), we're still missing an important piece of the software / unit testing puzzle.

Future

Most directly, this issue feels kin to #2219, which proposes adapting the `severity` config. Your explanation above has helped sharpen my thinking here, and I like your proposal for simple boolean expressions using comparison operators. What do you think about a spec like this:

```yml
version: 2
models:
  - name: my_model
    columns:
      - name: not_unique_col
        tests:
          - test_one:
              warn: ">0"     # cf. `severity: warn`, `warn_after: 0`
          - test_two:
              error: "<50%"  # raise an error if majority of records *are not* returned by the test query
          - test_three:
              warn: "!=0"
              error: "!=0"   # `error` always takes precedence over `warn`
```

By default, tests have `error: "!=0"` (i.e. today's behavior).
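To make the percentage form concrete, here is a sketch of the intended semantics only, not a proposed implementation (`{{ test_query }}` is just a placeholder for the test's compiled SQL): `error: "<50%"` would compare the share of records returned by the test query against the threshold.

```sql
-- Hypothetical check for `error: "<50%"` on my_model:
-- raise an error when fewer than 50% of records are returned by the test query.
with failures as (
    select count(*) as n_failures
    from ( {{ test_query }} ) as t   -- the test's SQL, returning the offending rows
),

total as (
    select count(*) as n_total
    from my_model
)

select (100.0 * n_failures / n_total) < 50 as should_error
from failures
cross join total
```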
Current

This is the direction I was thinking in, since it's possible today and would mostly achieve the desired behavior. I imagine your config would look something like:

```yml
models:
  - name: my_model
    columns:
      - name: not_unique_col
        tests:
          - test_fail:
              test_macro: test_unique
              expect: "=1"
```

It would require some tricky business to render the test macro from the context, but it could be hacked. Much trickier would be configuring the severity based on the test result. I'm going to tag this for consideration in v0.20.0 ("The Test Release"); I think it neatly glosses #2219.
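For the record, a very rough sketch of the "hacked" version. Everything here is an assumption rather than a working recipe: the `context[...]` lookup, the macro signature, and the premise that the wrapped test returns its failing rows rather than a count.

```sql
{% macro test_test_fail(model, column_name, test_macro) %}
    {# Schema test macros are conventionally prefixed with `test_`,
       so this would be referenced as `test_fail` in the YAML above. #}

    {# The "tricky business": look the wrapped test macro up in the Jinja
       context and render its SQL. #}
    {% set inner_sql = context[test_macro](model, column_name) %}

    with inner_test as (
        {{ inner_sql }}
    )

    -- Invert the usual logic: return 0 rows (pass) only when the inner test
    -- DID return rows, i.e. when it failed as expected.
    select 1 as unexpected_pass
    where not exists (select 1 from inner_test)

{% endmacro %}
```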
This is something I really want too, but I'd like to extend the scope to also include compile errors. If this should be raised as a separate issue please let me know. This would be a bit different to the original issue, as you'd be checking for a compile error rather than a database error. I wanted this feature when writing schema tests that raise compile errors to validate inputs (like this in dbt-utils). Currently I manually check that the compile errors are being raised when they should be, but I don't see a way of creating a test to do that. A way to declare that a test is expected to raise a compile error would be useful.
Hey @MarkMacArdle, what's the context in which you'd want a test to raise a compilation error? Would it be for integration-testing the macros themselves? This does feel a bit different to me from the requested feature, which proposes an expected/thresholded failure mode only when the test SQL succeeds and returns results. I don't see a config for expected compilation errors fitting neatly into this proposal.
Closing in favor of the concrete proposal for configurable warn/error thresholds discussed above.
Hi @jtcohen6! I'd like to reopen this. In traditional software engineering, one might have unit tests that specifically expect failure. This would be to ensure that known-good code throws specific errors on known-bad data, e.g. if your code specifically defines conditions on its inputs for which it throws an error; there's plenty of prior art for this in other unit-testing frameworks.
In dbt, we can only test via dbt tests, and so for me the most obvious way to get something akin to unit tests is to use something like a seed file with known-bad data, test on it, and, conditioned on the target/environment, expect a failure. I imagine this running in CI/CD. I'm not sure exactly what this needs to look like, since in a way we'd want to be able to associate specific targets/environments with specific failures, so there might be a need to support context awareness via e.g. Jinja.
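One hedged sketch of what that could look like with the tools available today; the seed, the `ci` target name, and the macro below are hypothetical illustrations rather than a concrete proposal. A custom schema test runs over a seed of known-bad data and inverts its pass/fail logic when executed against a CI target:

```sql
{% macro test_unique_expecting_bad_data(model, column_name) %}

    with violations as (
        select {{ column_name }}
        from {{ model }}
        group by {{ column_name }}
        having count(*) > 1
    )

    {% if target.name == 'ci' %}
        -- In the CI target this test runs against a seed of known-bad data,
        -- so finding *zero* violations is itself a failure.
        select 1 as missing_expected_failure
        where not exists (select 1 from violations)
    {% else %}
        -- Everywhere else, behave like a normal uniqueness test.
        select * from violations
    {% endif %}

{% endmacro %}
```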
@mike-weinberg Have you had a chance to read one of the coolest GitHub discussions from the past few months? Better yet: this work is in flight! We're planning to have it in beta a few months from now.
AMAZE. That will work for us, thanks!
Describe the feature
An `expect` flag for tests that causes the test to pass when it would ordinarily fail, and vice versa. This would accomplish two things; in particular, with an `expect: fail` flag, test coverage for macros of all kinds can be dramatically improved.

Describe alternatives you've considered
One alternative is to handle test failures in the CI pipeline. Using tags like `expect_fail`, we could exclude these tests from the standard test suite and run them in a separate pipeline that expects the tests to fail.

Another alternative is to have some type of `test_fail` macro that would consume any other test and run its SQL as an ephemeral model wrapped in a query that inverts the result. I am not sure there is a clean way to pass arguments between the generic `test_fail` and the specific test to be failed.

Finally, we could rewrite all tests as macros that produce tables and make a `pass` version and a `fail` version.

Who will this benefit?
Everyone who uses shared tooling, because of improved test coverage. Probably most dbt users.
Are you interested in contributing this feature?
Yes. If we can settle on an interface, I would be excited to work on this. @clrcrl tells me that you have plans to decouple the test dataset from the `count(*)` wrapper. This might be a good opportunity to allow the `expect` flag on the test config to default to `expect: =0`, so as to allow `=1` or `>0`, or perhaps this plus the synonyms `pass` as `=0` and `fail` as `>0`.
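To make that interface concrete, here is one hypothetical spelling of the proposal (not an implemented dbt feature; the `expect` key and its synonyms are exactly what is being requested above):

```yml
models:
  - name: my_model
    columns:
      - name: not_unique_col
        tests:
          - unique              # implicit default: expect: "=0"
          - unique:
              expect: ">0"      # inverted: pass only when duplicates exist
          - unique:
              expect: fail      # synonym for ">0"; `pass` would mean "=0"
```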