
Commit

Merge pull request #117 from charlottevdscheun/fix/incremental_overwrite
replace partitionOverwriteMode inside merge strategy
jtcohen6 authored Nov 5, 2020
2 parents 9d04452 + a34527a commit e7d73ef
Showing 2 changed files with 6 additions and 4 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -127,7 +127,7 @@ The following configurations can be supplied to models run with the dbt-spark pl
**Incremental Models**

To use incremental models, specify a `partition_by` clause in your model config. The default incremental strategy used is `insert_overwrite`, which will overwrite the partitions included in your query. Be sure to re-select _all_ of the relevant
- data for a partition when using the `insert_overwrite` strategy.
+ data for a partition when using the `insert_overwrite` strategy. If a `partition_by` config is not specified, dbt will overwrite the entire table as an atomic operation, replacing it with new data of the same schema. This is analogous to `truncate` + `insert`.

```
{{ config(
    ...
) }}
```
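For context (the diff view collapses the full example), a hypothetical incremental model config with a `partition_by` clause might look like the following; the model and column names are illustrative, not taken from this repository:

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    partition_by=['date_day']
) }}

-- Re-select ALL rows for each partition touched by this run:
-- insert_overwrite replaces whole partitions, not individual rows.
select
    date_day,
    count(*) as n_events
from {{ ref('stg_events') }}
group by date_day
```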
8 changes: 5 additions & 3 deletions dbt/include/spark/macros/materializations/incremental.sql
@@ -100,9 +100,11 @@
{% do dbt_spark_validate_merge(file_format) %}
{% endif %}

-  {% call statement() %}
-    set spark.sql.sources.partitionOverwriteMode = DYNAMIC
-  {% endcall %}
+  {% if config.get('partition_by') %}
+    {% call statement() %}
+      set spark.sql.sources.partitionOverwriteMode = DYNAMIC
+    {% endcall %}
+  {% endif %}

{% call statement() %}
set spark.sql.hive.convertMetastoreParquet = false
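The behavior this conditional guards can be sketched in plain Spark SQL; the table names below are hypothetical, used only to illustrate the setting:

```sql
-- With DYNAMIC mode, INSERT OVERWRITE replaces only the partitions
-- that appear in the query result; all other partitions are untouched.
SET spark.sql.sources.partitionOverwriteMode = DYNAMIC;

INSERT OVERWRITE TABLE analytics.events
SELECT * FROM analytics.events_staging;

-- Under the default STATIC mode (the case with no partition_by config),
-- the same statement would overwrite the entire table, which is why the
-- macro now only sets DYNAMIC when a partition_by config is present.
```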
