As part of the dbt-sugar doc flow, we want to allow people to add tests to columns. While we can stick with just populating their yaml files, it would be great if we could test their model for them at add-time.
For example, if a user were to say "I want column foo to be unique", then when we're about to add it to their model's schema.yml file we would actually fire the test query. If the test fails, we would tell them that it failed and not add it.
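As a rough illustration of the idea, the query we'd fire for a uniqueness check would look something like what dbt's built-in unique test compiles to. The function name and exact SQL shape below are assumptions for illustration, not dbt-sugar or dbt internals:

```python
# Sketch only: approximates the query dbt's built-in `unique` test runs.
# `render_unique_test` is a hypothetical helper, not part of any API.

def render_unique_test(model: str, column: str) -> str:
    """Return a query that yields rows only when `column` has duplicates."""
    return (
        f"select {column}, count(*) as n_records\n"
        f"from {model}\n"
        f"group by {column}\n"
        f"having count(*) > 1"
    )

# A zero-row result would mean the test passes and the entry can be
# written to schema.yml; any returned rows mean we warn the user and skip it.
print(render_unique_test("analytics.orders", "order_id"))
```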
The issue with doing that is that we could end up going down the rabbit hole of re-implementing dbt's built-ins, and we wouldn't be able to easily integrate or support custom tests which users may write as macros or import via packages.
There is probably a way to just use dbt as an API, or to peek into its code, find the SQL templates, and test that way.
One potentially dirty way would be to generate subprocess calls that trigger something like dbt test -m model with a temporarily generated yaml. It would work and wouldn't make us rely too much on the dbt Python API, which isn't yet marked as stable, but it's dirty and involved because we'd have to parse logs etc.
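A minimal sketch of that subprocess route might look like the following. The function names, the temporary file name, and the pass/fail-via-exit-code shortcut are all assumptions; in practice we'd still need log parsing to report which test failed and why:

```python
# Sketch of the subprocess approach: write a throwaway schema.yml entry,
# shell out to `dbt test -m <model>`, and read the exit code. Helper
# names and the temp-file strategy are illustrative assumptions.
import subprocess
from pathlib import Path


def temp_schema_yml(model: str, column: str, test: str) -> str:
    """Render a minimal schema.yml declaring one column test (dbt v2 layout)."""
    return (
        "version: 2\n"
        "models:\n"
        f"  - name: {model}\n"
        "    columns:\n"
        f"      - name: {column}\n"
        "        tests:\n"
        f"          - {test}\n"
    )


def dbt_test_command(model: str) -> list[str]:
    """Command line that runs only this model's tests."""
    return ["dbt", "test", "-m", model]


def run_test(project_dir: str, model: str, column: str, test: str) -> bool:
    """Write the temp yaml, run dbt, and report pass/fail via the exit code."""
    schema_path = Path(project_dir) / "models" / "schema_tmp.yml"
    schema_path.write_text(temp_schema_yml(model, column, test))
    try:
        result = subprocess.run(dbt_test_command(model), cwd=project_dir)
        # Exit code alone loses detail; real reporting would parse dbt's logs.
        return result.returncode == 0
    finally:
        schema_path.unlink()
```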
@bastienboutonnet Interesting! Idea being, you want to warn users who are trying to add tests that currently fail?
I would definitely discourage hooking too deeply into dbt's tasks or Python methods. As you say, they're undocumented and liable to change; we're also likely to change the innards of tests in particular for v0.20.0.
That said, you could probably get away with wrapping the main point of entry, handle_and_check, in a way that's similar to how we do it for integration tests: run_dbt_and_check and run_dbt, the latter offering the extra sugar of expect_pass = False. We're actually thinking of wrapping and exposing the integration testing framework as a module for the benefit of dbt adapter maintainers (dbt-labs/dbt-adapter-tests#13).
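A wrapper in the spirit of those integration-test helpers might be sketched as below. The import path and the (results, success) return shape reflect pre-1.0 dbt internals as discussed above, and are explicitly unstable; treat them as assumptions to verify against the pinned dbt version:

```python
# Sketch of wrapping dbt's entry point the way its integration tests do
# (run_dbt / run_dbt_and_check). `build_args` is a hypothetical helper;
# `dbt.main.handle_and_check` is an unstable internal API.

def build_args(model: str) -> list[str]:
    """CLI-style args for testing a single model."""
    return ["test", "--models", model]


def run_dbt(args: list[str], expect_pass: bool = True):
    """Mimic dbt's integration-test helper: run dbt and check the outcome."""
    from dbt.main import handle_and_check  # unstable internal, pre-1.0 dbt

    results, success = handle_and_check(args)
    assert success == expect_pass, f"dbt exited with success={success}"
    return results
```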
@jtcohen6 Thanks a lot for chiming in on this! Thanks for the heads up on v0.20.0; I think I'll indeed stay away as much as possible from the insides until things settle a bit.
I think what you propose may just work! I'll have a play around over the weekend or so; I'm not in a huge rush on this feature. But yeah, I think this would give us the ability to refer to a test that lives in dbt space or in the user's macros, without having to reinvent the wheel in our code.