diff --git a/website/blog/2021-11-22-dbt-labs-pr-template.md b/website/blog/2021-11-22-dbt-labs-pr-template.md
index 439a02371ec..02a3ae5f44e 100644
--- a/website/blog/2021-11-22-dbt-labs-pr-template.md
+++ b/website/blog/2021-11-22-dbt-labs-pr-template.md
@@ -252,4 +252,4 @@ Once the file is added, name it whatever you want to make it clear that it’s y
With that, you now have a pull request template in your GitHub repository that can help your team follow analytics engineering best practices.
-To dive deeper into how we use it as part of the analytics engineering workflow, check out the free [dbt Fundamentals on-demand course](https://courses.getdbt.com/courses/fundamentals).
+To dive deeper into how we use it as part of the analytics engineering workflow, check out the free [dbt Fundamentals on-demand course](https://learn.getdbt.com/courses/dbt-fundamentals).
diff --git a/website/blog/2021-11-22-primary-keys.md b/website/blog/2021-11-22-primary-keys.md
index d5f87cddd94..89022606153 100644
--- a/website/blog/2021-11-22-primary-keys.md
+++ b/website/blog/2021-11-22-primary-keys.md
@@ -140,4 +140,4 @@ You can query out primary key columns from the `pg_index` and `pg_attribute` adm
## Have you started testing primary keys yet?
-If you’re looking for a deeper dive on testing primary keys, definitely check out the [dbt Fundamentals course](https://courses.getdbt.com/courses/fundamentals), which includes a full section with examples + practice on data testing in dbt.
+If you’re looking for a deeper dive on testing primary keys, definitely check out the [dbt Fundamentals course](https://learn.getdbt.com/courses/dbt-fundamentals), which includes a full section with examples + practice on data testing in dbt.
diff --git a/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md b/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
index f3a24a0febd..52b2746ca14 100644
--- a/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
+++ b/website/blog/2021-12-05-how-to-build-a-mature-dbt-project-from-scratch.md
@@ -107,7 +107,7 @@ We’re going to:
**Project Appearance**
-Let's check in on the growth of [our project](https://github.com/dbt-labs/dbt-project-maturity/tree/main/2-toddlerhood). We've broken some of our logic into its own model — our original script had repetitive logic in subqueries, now it's following a key principle of analytics engineering: Don't Repeat Yourself (DRY). For more information on how to refactor your SQL queries for Modularity - check out our [free on-demand course](https://courses.getdbt.com/courses/refactoring-sql-for-modularity).
+Let's check in on the growth of [our project](https://github.com/dbt-labs/dbt-project-maturity/tree/main/2-toddlerhood). We've broken some of our logic into its own model — our original script had repetitive logic in subqueries; now it follows a key principle of analytics engineering: Don't Repeat Yourself (DRY). For more information on how to refactor your SQL queries for modularity, check out our [free on-demand course](https://learn.getdbt.com/courses/refactoring-sql-for-modularity).
We also added our first [YML files](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/). Here, we have one yml file to [configure our sources](https://github.com/dbt-labs/dbt-project-maturity/blob/main/2-toddlerhood/models/source.yml), and one yml file to [describe our models](https://github.com/dbt-labs/dbt-project-maturity/blob/main/2-toddlerhood/models/schema.yml). We're just starting with basic declarations of our sources, testing using dbt's built-in tests, and a model-level description -- these are the first steps of a project just learning to walk!
diff --git a/website/blog/2022-05-19-redshift-configurations-dbt-model-optimizations.md b/website/blog/2022-05-19-redshift-configurations-dbt-model-optimizations.md
index c01194360f1..0805d82fa81 100644
--- a/website/blog/2022-05-19-redshift-configurations-dbt-model-optimizations.md
+++ b/website/blog/2022-05-19-redshift-configurations-dbt-model-optimizations.md
@@ -229,7 +229,7 @@ This is where the power of dbt modeling really comes in! dbt allows you to break
The following are some methods I’ve used in order to properly optimize run times, leveraging dbt’s ability to modularize models.
:::note Note
-I won’t get into our modeling methodology at dbt Labs in this article, but there are [plenty of resources](https://courses.getdbt.com/) to understand what might be happening in the following DAGs!
+I won’t get into our modeling methodology at dbt Labs in this article, but there are [plenty of resources](https://learn.getdbt.com/) to understand what might be happening in the following DAGs!
:::
### Staggered joins
diff --git a/website/blog/2022-07-19-migrating-from-stored-procs.md b/website/blog/2022-07-19-migrating-from-stored-procs.md
index e2afdbfcd66..ce7afb5d1a5 100644
--- a/website/blog/2022-07-19-migrating-from-stored-procs.md
+++ b/website/blog/2022-07-19-migrating-from-stored-procs.md
@@ -221,5 +221,5 @@ dbt Labs has developed a number of related resources you can use to learn more a
- [Refactoring legacy SQL to dbt](https://docs.getdbt.com/tutorial/refactoring-legacy-sql)
- [The case for the ELT workflow](https://www.getdbt.com/analytics-engineering/case-for-elt-workflow/)
-- [Refactoring SQL for modularity](https://courses.getdbt.com/courses/refactoring-sql-for-modularity)
+- [Refactoring SQL for modularity](https://learn.getdbt.com/courses/refactoring-sql-for-modularity)
- [Data modeling techniques for modularity](https://www.getdbt.com/analytics-engineering/modular-data-modeling-technique/)
diff --git a/website/blog/2022-11-30-dbt-project-evaluator.md b/website/blog/2022-11-30-dbt-project-evaluator.md
index 3ea7a459c35..fa4cb54aab6 100644
--- a/website/blog/2022-11-30-dbt-project-evaluator.md
+++ b/website/blog/2022-11-30-dbt-project-evaluator.md
@@ -34,7 +34,7 @@ Throughout these engagements, we began to take note of the common issues many an
Maybe your team is facing some of these issues right now 👀 And that’s okay! We know that building an effective, scalable dbt project takes a lot of effort and brain power. Maybe you’ve inherited a legacy dbt project with a mountain of tech debt. Maybe you’re starting from scratch. Either way it can be difficult to know the best way to set your team up for success. Don’t worry, you’re in the right place!
-Through solving these problems over and over, the Professional Services team began to hone our best practices for working with dbt and how analytics engineers could improve their dbt project. We added “solutions reviews'' to our list of service offerings — client engagements in which we evaluate a given dbt project and provide specific recommendations to improve performance, save developer time, and prevent misuse of dbt’s features. And in an effort to share these best practices with the wider dbt community, we developed a *lot* of content. We wrote articles on the Developer Blog (see [1](https://docs.getdbt.com/blog/on-the-importance-of-naming), [2](https://discourse.getdbt.com/t/your-essential-dbt-project-checklist/1377), and [3](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview)), gave [Coalesce talks](https://www.getdbt.com/coalesce-2020/auditing-model-layers-and-modularity-with-your-dag/), and created [training courses](https://courses.getdbt.com/courses/refactoring-sql-for-modularity).
+Through solving these problems over and over, the Professional Services team began to hone our best practices for working with dbt and how analytics engineers could improve their dbt project. We added “solutions reviews” to our list of service offerings — client engagements in which we evaluate a given dbt project and provide specific recommendations to improve performance, save developer time, and prevent misuse of dbt’s features. And in an effort to share these best practices with the wider dbt community, we developed a *lot* of content. We wrote articles on the Developer Blog (see [1](https://docs.getdbt.com/blog/on-the-importance-of-naming), [2](https://discourse.getdbt.com/t/your-essential-dbt-project-checklist/1377), and [3](https://docs.getdbt.com/best-practices/how-we-structure/1-guide-overview)), gave [Coalesce talks](https://www.getdbt.com/coalesce-2020/auditing-model-layers-and-modularity-with-your-dag/), and created [training courses](https://learn.getdbt.com/courses/refactoring-sql-for-modularity).
Time and time again, we found that when teams are aligned with these best practices, their projects are more:
diff --git a/website/blog/2023-03-30-guide-to-debug-in-jinja.md b/website/blog/2023-03-30-guide-to-debug-in-jinja.md
index 27935ad484f..229cbe13ed7 100644
--- a/website/blog/2023-03-30-guide-to-debug-in-jinja.md
+++ b/website/blog/2023-03-30-guide-to-debug-in-jinja.md
@@ -9,7 +9,7 @@ date: 2023-03-29
is_featured: true
---
-*Editor's note—this post assumes intermediate knowledge of Jinja and macros development in dbt. For an introduction to Jinja in dbt check out [the documentation](https://docs.getdbt.com/docs/build/jinja-macros) and the free self-serve course on [Jinja, Macros, Pacakages](https://courses.getdbt.com/courses/jinja-macros-packages).*
+*Editor's note—this post assumes intermediate knowledge of Jinja and macros development in dbt. For an introduction to Jinja in dbt check out [the documentation](https://docs.getdbt.com/docs/build/jinja-macros) and the free self-serve course on [Jinja, Macros, Packages](https://learn.getdbt.com/courses/jinja-macros-and-packages).*
Jinja brings a lot of power to dbt, allowing us to use `ref()`, `source()`, conditional code, and [macros](https://docs.getdbt.com/docs/build/jinja-macros). But while Jinja brings flexibility, it also brings complexity, and as is often the case with code, things can run in unexpected ways.
diff --git a/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md b/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md
index 44499c51ec5..bc3e9405bf4 100644
--- a/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md
+++ b/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md
@@ -74,7 +74,7 @@ Now that we’ve done the pipeline work to set up our metrics for the semantic l
## Our Finance, Operations and GTM teams are all looking at the same metrics 😊
-To query to Semantic Layer you have two paths: you can query metrics directly through the Semantic Layer APIs or use one of our [first-class integrations](https://docs.getdbt.com/docs/use-dbt-semantic-layer/avail-sl-integrations). Our analytics team and product teams are big Hex users, while our operations and finance teams live and breathe Google Sheets, so it’s important for us to have the same metric definitions available in both tools.
+To query the Semantic Layer, you have two paths: you can query metrics directly through the Semantic Layer APIs or use one of our [first-class integrations](https://docs.getdbt.com/docs/cloud-integrations/avail-sl-integrations). Our analytics team and product teams are big Hex users, while our operations and finance teams live and breathe Google Sheets, so it’s important for us to have the same metric definitions available in both tools.
The leg work of building our pipeline and defining metrics is all done, which makes last-mile consumption much easier. First, we set up a launch dashboard in Hex as the source of truth for semantic layer product metrics. This tool is used by cross-functional partners like marketing, sales, and the executive team to easily check product and usage metrics like total semantic layer queries, or weekly active semantic layer users. To set up our Hex connection, we simply enter a few details from our dbt Cloud environment and then we can work with metrics directly in Hex notebooks. We can use the JDBC interface, or use Hex’s GUI metric builder to build reports. We run all our WBRs off this dashboard, which allows us to spot trends in consumption and react quickly to changes in our business.
diff --git a/website/blog/2024-05-02-semantic-layer-llm.md b/website/blog/2024-05-02-semantic-layer-llm.md
index 4b1e2e89add..8c2feaafaed 100644
--- a/website/blog/2024-05-02-semantic-layer-llm.md
+++ b/website/blog/2024-05-02-semantic-layer-llm.md
@@ -293,6 +293,6 @@ grant usage on function submit_sl_request(string) to role public;
## Wrapping Up
-Building this application has been an absolute blast for multiple reasons. First, we’ve been able to use it internally within the SA org to demonstrate how the semantic layer works. It provides yet another [integration](https://docs.getdbt.com/docs/use-dbt-semantic-layer/avail-sl-integrations) point that further drives home the fundamental value prop of using the Semantic Layer. Secondly, and more importantly, it has served as an example to those customers thinking about (or being pushed to think about) how they can best utilize these technologies to further their goals. Finally, I’ve been able to be heads down, hands on keyboard learning about all of these interesting technologies and stepping back into the role of builder is something I will never turn down!
+Building this application has been an absolute blast for multiple reasons. First, we’ve been able to use it internally within the SA org to demonstrate how the semantic layer works. It provides yet another [integration](https://docs.getdbt.com/docs/cloud-integrations/avail-sl-integrations) point that further drives home the fundamental value prop of using the Semantic Layer. Secondly, and more importantly, it has served as an example to those customers thinking about (or being pushed to think about) how they can best utilize these technologies to further their goals. Finally, I’ve been able to be heads down, hands on keyboard learning about all of these interesting technologies and stepping back into the role of builder is something I will never turn down!
Finally, to see the entire code, from Snowflake to Streamlit, check out the repo [here](https://github.com/dpguthrie/dbt-sl-cortex-streamlit-blog/tree/main?tab=readme-ov-file).
diff --git a/website/dbt-versions.js b/website/dbt-versions.js
index efa9450ae37..c456bda8d8e 100644
--- a/website/dbt-versions.js
+++ b/website/dbt-versions.js
@@ -15,14 +15,6 @@ exports.versions = [
version: "1.5",
EOLDate: "2024-04-27",
},
- {
- version: "1.4",
- EOLDate: "2024-01-25",
- },
- {
- version: "1.3",
- EOLDate: "2023-10-12",
- },
]
exports.versionedPages = [
@@ -154,14 +146,6 @@ exports.versionedPages = [
"page": "reference/resource-properties/versions",
"firstVersion": "1.5",
},
- {
- "page": "reference/dbt-jinja-functions/local-md5",
- "firstVersion": "1.4",
- },
- {
- "page": "reference/warehouse-setups/fal-setup",
- "firstVersion": "1.3",
- },
{
"page": "reference/resource-configs/on_configuration_change",
"firstVersion": "1.6",
diff --git a/website/docs/best-practices/dbt-unity-catalog-best-practices.md b/website/docs/best-practices/dbt-unity-catalog-best-practices.md
index a55e1d121af..73d308232f3 100644
--- a/website/docs/best-practices/dbt-unity-catalog-best-practices.md
+++ b/website/docs/best-practices/dbt-unity-catalog-best-practices.md
@@ -61,7 +61,7 @@ Ready to start transforming your Unity Catalog datasets with dbt?
Check out the resources below for guides, tips, and best practices:
- [How we structure our dbt projects](/best-practices/how-we-structure/1-guide-overview)
-- [Self-paced dbt fundamentals training videos](https://courses.getdbt.com/courses/fundamentals)
+- [Self-paced dbt fundamentals training course](https://learn.getdbt.com/courses/dbt-fundamentals)
- [Customizing CI/CD](/guides/custom-cicd-pipelines)
- [Debugging errors](/guides/debug-errors)
- [Writing custom generic tests](/best-practices/writing-custom-generic-tests)
diff --git a/website/docs/docs/build/about-metricflow.md b/website/docs/docs/build/about-metricflow.md
index 4ba15ceeacf..deaa94a8149 100644
--- a/website/docs/docs/build/about-metricflow.md
+++ b/website/docs/docs/build/about-metricflow.md
@@ -17,7 +17,7 @@ Before you start, consider the following guidelines:
- Define metrics in YAML and query them using these [new metric specifications](https://github.com/dbt-labs/dbt-core/discussions/7456).
- You must be on [dbt version](/docs/dbt-versions/upgrade-dbt-version-in-cloud) 1.6 or higher to use MetricFlow.
- Use MetricFlow with Snowflake, BigQuery, Databricks, Postgres (dbt Core only), or Redshift.
-- Discover insights and query your metrics using the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) and its diverse range of [available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations).
+- Discover insights and query your metrics using the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) and its diverse range of [available integrations](/docs/cloud-integrations/avail-sl-integrations).
## MetricFlow
diff --git a/website/docs/docs/build/build-metrics-intro.md b/website/docs/docs/build/build-metrics-intro.md
index 8aa6d8ad5aa..7a6e612d750 100644
--- a/website/docs/docs/build/build-metrics-intro.md
+++ b/website/docs/docs/build/build-metrics-intro.md
@@ -58,7 +58,7 @@ MetricFlow allows you to:
@@ -67,5 +67,5 @@ MetricFlow allows you to:
- [Quickstart guide with the dbt Semantic Layer](/guides/sl-snowflake-qs)
- [The dbt Semantic Layer: what's next](https://www.getdbt.com/blog/dbt-semantic-layer-whats-next/) blog
-- [dbt Semantic Layer on-demand courses](https://courses.getdbt.com/courses/semantic-layer)
+- [dbt Semantic Layer on-demand course](https://learn.getdbt.com/courses/semantic-layer)
- [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)
diff --git a/website/docs/docs/build/custom-aliases.md b/website/docs/docs/build/custom-aliases.md
index 8d54dd1a841..9e9f91f968d 100644
--- a/website/docs/docs/build/custom-aliases.md
+++ b/website/docs/docs/build/custom-aliases.md
@@ -73,31 +73,6 @@ To override dbt's alias name generation, create a macro named `generate_alias_na
The default implementation of `generate_alias_name` simply uses the supplied `alias` config (if present) as the model alias, otherwise falling back to the model name. This implementation looks like this:
-
-
-
-
-```jinja2
-{% macro generate_alias_name(custom_alias_name=none, node=none) -%}
-
- {%- if custom_alias_name is none -%}
-
- {{ node.name }}
-
- {%- else -%}
-
- {{ custom_alias_name | trim }}
-
- {%- endif -%}
-
-{%- endmacro %}
-
-```
-
-
-
-
-
@@ -176,18 +151,9 @@ If these models should indeed have the same database identifier, you can work ar
#### Model versions
-
-
-New in v1.5
-
-
-
-
-
**Related documentation:**
- [Model versions](/docs/collaborate/govern/model-versions)
- [`versions`](/reference/resource-properties/versions#alias)
By default, dbt will create versioned models with the alias `<model_name>_v<v>`, where `<v>` is that version's unique identifier. You can customize this behavior just like for non-versioned models by configuring a custom `alias` or re-implementing the `generate_alias_name` macro.
-
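As a rough sketch of what this can look like (the model name, versions, and alias below are all placeholders), a custom alias for one version might be declared in the model's properties file:

```yaml
models:
  - name: dim_customers            # hypothetical model
    latest_version: 2
    versions:
      - v: 2                       # gets the default alias dim_customers_v2
      - v: 1
        config:
          alias: dim_customers     # override so v1 keeps the unversioned name
```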
diff --git a/website/docs/docs/build/exposures.md b/website/docs/docs/build/exposures.md
index c6bd6ca6122..bcbe819d98c 100644
--- a/website/docs/docs/build/exposures.md
+++ b/website/docs/docs/build/exposures.md
@@ -12,8 +12,6 @@ Exposures make it possible to define and describe a downstream use of your dbt p
Exposures are defined in `.yml` files nested under an `exposures:` key.
-
-
```yaml
@@ -42,38 +40,6 @@ exposures:
-
-
-
-
-
-
-```yaml
-version: 2
-
-exposures:
-
- - name: weekly_jaffle_report
- type: dashboard
- maturity: high
- url: https://bi.tool/dashboards/1
- description: >
- Did someone say "exponential growth"?
-
- depends_on:
- - ref('fct_orders')
- - ref('dim_customers')
- - source('gsheets', 'goals')
-
- owner:
- name: Callum McData
- email: data@jaffleshop.com
-```
-
-
-
-
-
### Available properties
_Required:_
@@ -81,20 +47,9 @@ _Required:_
- **type**: one of `dashboard`, `notebook`, `analysis`, `ml`, `application` (used to organize in docs site)
- **owner**: `name` or `email` required; additional properties allowed
-
-
_Expected:_
- **depends_on**: list of refable nodes, including `metric`, `ref`, and `source`. While possible, it is highly unlikely you will ever need an `exposure` to depend on a `source` directly.
-
-
-
-
-_Expected:_
-- **depends_on**: list of refable nodes, including `ref` and `source` (While possible, it is highly unlikely you will ever need an `exposure` to depend on a `source` directly)
-
-
-
_Optional:_
- **label**: May contain spaces, capital letters, or special characters.
- **url**: Activates and populates the link to **View this exposure** in the upper right corner of the generated documentation site
diff --git a/website/docs/docs/build/incremental-strategy.md b/website/docs/docs/build/incremental-strategy.md
index 998ac84803a..e1a67f93da7 100644
--- a/website/docs/docs/build/incremental-strategy.md
+++ b/website/docs/docs/build/incremental-strategy.md
@@ -47,16 +47,12 @@ The `merge` strategy is available in dbt-postgres and dbt-redshift beginning in
-
-
:::note Snowflake Configurations
-dbt v1.3 changed the default materialization for incremental table merges from `temporary table` to `view`. For more information about this change and instructions for setting the configuration to a temp table, please read about [Snowflake temporary tables](/reference/resource-configs/snowflake-configs#temporary-tables).
+dbt has changed the default materialization for incremental table merges from `temporary table` to `view`. For more information about this change and instructions for setting the configuration to a temp table, please read about [Snowflake temporary tables](/reference/resource-configs/snowflake-configs#temporary-tables).
:::
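If your project depends on the previous behavior, a minimal sketch of opting back into temp tables (assuming the `tmp_relation_type` config described on the linked Snowflake configs page; the project and folder names are placeholders) could look like:

```yaml
# dbt_project.yml
models:
  my_project:
    marts:
      +materialized: incremental
      +incremental_strategy: merge
      +tmp_relation_type: table    # stage the merge through a temporary table instead of a view
```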
-
-
### Configuring incremental strategy
The `incremental_strategy` config can either be defined in specific models or
@@ -90,8 +86,6 @@ select ...
-
-
### Strategy-specific configs
If you use the `merge` strategy and specify a `unique_key`, by default, dbt will entirely overwrite matched rows with new values.
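As an illustrative sketch (model and column names are placeholders, and support varies by adapter), you can restrict which columns are rewritten on matched rows with `merge_update_columns`:

```yaml
models:
  - name: fct_orders               # hypothetical model
    config:
      materialized: incremental
      incremental_strategy: merge
      unique_key: order_id
      merge_update_columns: ['status', 'amount', 'updated_at']   # only these columns are updated on a match
```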
@@ -134,10 +128,6 @@ select ...
-
-
-
-
### About incremental_predicates
`incremental_predicates` is an advanced use of incremental models, where data volume is large enough to justify additional investments in performance. This config accepts a list of any valid SQL expression(s). dbt does not check the syntax of the SQL statements.
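As a hedged sketch (the model, column, and seven-day window below are illustrative; `DBT_INTERNAL_DEST` is the alias the generated merge statement uses for the existing table on some adapters), a predicate that limits how much of the target table is scanned might look like:

```yaml
models:
  - name: my_incremental_model     # hypothetical model
    config:
      materialized: incremental
      incremental_strategy: merge
      unique_key: id
      incremental_predicates:
        # passed through verbatim; dbt does not validate this SQL
        - "DBT_INTERNAL_DEST.session_start > dateadd(day, -7, current_date)"
```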
@@ -216,8 +206,6 @@ The syntax depends on how you configure your `incremental_strategy`:
- There's a decent amount of conceptual overlap with the `insert_overwrite` incremental strategy.
:::
-
-
### Built-in strategies
Before diving into [custom strategies](#custom-strategies), it's important to understand the built-in incremental strategies in dbt and their corresponding macros:
diff --git a/website/docs/docs/build/packages.md b/website/docs/docs/build/packages.md
index aae8cc94361..b3705effcce 100644
--- a/website/docs/docs/build/packages.md
+++ b/website/docs/docs/build/packages.md
@@ -145,8 +145,6 @@ To find the latest release for a package, navigate to the `Releases` tab in the
As of v0.14.0, dbt will warn you if you install a package using the `git` syntax without specifying a version (see below).
-
-
### Internally hosted tarball URL
Some organizations have security requirements to pull resources only from internal services. To address the need to install packages from hosted environments such as Artifactory or cloud storage buckets, dbt Core enables you to install packages from internally-hosted tarball URLs.
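As a minimal sketch (the URL below is a placeholder for your internally hosted artifact), the entry in `packages.yml` pairs a `tarball` URL with a `name`:

```yaml
packages:
  - tarball: "https://artifactory.example.com/dbt-packages/dbt_utils-1.1.1.tar.gz"   # placeholder URL
    name: 'dbt_utils'    # subfolder created under dbt_packages for this package
```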
@@ -160,8 +158,6 @@ packages:
Where `name: 'dbt_utils'` specifies the subfolder of `dbt_packages` that's created for the package source code to be installed within.
-
-
### Private packages
#### SSH Key Method (Command Line only)
diff --git a/website/docs/docs/build/python-models.md b/website/docs/docs/build/python-models.md
index 213d588790b..0a089d263c7 100644
--- a/website/docs/docs/build/python-models.md
+++ b/website/docs/docs/build/python-models.md
@@ -3,7 +3,7 @@ title: "Python models"
id: "python-models"
---
-dbt Core v1.3 adds support for Python models. Note that only [specific data platforms](#specific-data-platforms) support dbt-py models.
+Note that only [specific data platforms](#specific-data-platforms) support dbt-py models.
We encourage you to:
- Read [the original discussion](https://github.com/dbt-labs/dbt-core/discussions/5261) that proposed this feature.
@@ -16,7 +16,6 @@ We encourage you to:
dbt Python (`dbt-py`) models can help you solve use cases that can't be solved with SQL. You can perform analyses using tools available in the open-source Python ecosystem, including state-of-the-art packages for data science and statistics. Before, you would have needed separate infrastructure and orchestration to run Python transformations in production. Python transformations defined in dbt are models in your project with all the same capabilities around testing, documentation, and lineage.
-
@@ -257,7 +256,7 @@ def model(dbt, session):
### Materializations
Python models support these materializations:
-- `table` (default)
+- `table` (default)
- `incremental`
Incremental Python models support all the same [incremental strategies](/docs/build/incremental-strategy) as their SQL counterparts. The specific strategies supported depend on your adapter. As an example, incremental models are supported on BigQuery with Dataproc for the `merge` incremental strategy; the `insert_overwrite` strategy is not yet supported.
@@ -782,4 +781,3 @@ You can also install packages at cluster creation time by [defining cluster prop
-
diff --git a/website/docs/docs/build/sql-models.md b/website/docs/docs/build/sql-models.md
index d0dc8becdcb..87e063cdcdb 100644
--- a/website/docs/docs/build/sql-models.md
+++ b/website/docs/docs/build/sql-models.md
@@ -18,13 +18,8 @@ If you're new to dbt, we recommend that you read a [quickstart guide](/guides) t
:::
-
+dbt's Python capabilities are an extension of its capabilities with SQL models. If you're new to dbt, we recommend that you read this page first before reading ["Python models"](/docs/build/python-models).
-Starting in v1.3, dbt Core adds support for **Python models**.
-
-dbt's Python capabilities are an extension of its capabilities with SQL models. If you're new to dbt, we recommend that you read this page first, before reading: ["Python Models"](/docs/building-a-dbt-project/building-models/python-models)
-
-
A SQL model is a `select` statement. Models are defined in `.sql` files (typically in your `models` directory):
- Each `.sql` file contains one model / `select` statement
diff --git a/website/docs/docs/cloud-integrations/about-snowflake-native-app.md b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
new file mode 100644
index 00000000000..fa37e15dcc6
--- /dev/null
+++ b/website/docs/docs/cloud-integrations/about-snowflake-native-app.md
@@ -0,0 +1,47 @@
+---
+title: "About the dbt Snowflake Native App"
+id: "snowflake-native-app"
+description: "An overview of the dbt Snowflake Native App for dbt Cloud accounts"
+pagination_prev: null
+pagination_next: "docs/cloud-integrations/set-up-snowflake-native-app"
+---
+
+# About the dbt Snowflake Native App
+
+The dbt Snowflake Native App — powered by the Snowflake Native App Framework and Snowpark Container Services — extends your dbt Cloud experience into the Snowflake user interface. You'll be able to access these three experiences with your Snowflake login:
+
+- **dbt Explorer** — An embedded version of [dbt Explorer](/docs/collaborate/explore-projects)
+- **Ask dbt** — A dbt-assisted chatbot, powered by [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), OpenAI, and Snowflake Cortex
+- **Orchestration observability** — A view into the [job run history](/docs/deploy/run-visibility) and the ability to trigger Snowflake tasks with [deploy jobs](/docs/deploy/deploy-jobs).
+
+These experiences enable you to extend what's been built with dbt Cloud to users who have traditionally worked downstream from the dbt project, such as BI analysts and technical stakeholders.
+
+For installation instructions, refer to [Set up the dbt Snowflake Native App](/docs/cloud-integrations/set-up-snowflake-native-app).
+
+## Architecture
+
+There are three tools connected to the operation of the dbt Snowflake Native App:
+
+| Tool | Description |
+|------------------------------------|-------------|
+| Consumer’s Snowflake account | The location of where the Native App is installed, powered by Snowpark Container Services. <br /> The Native App makes calls to the dbt Cloud APIs and Datadog APIs (for logging) using [Snowflake's external network access](https://docs.snowflake.com/en/developer-guide/external-network-access/external-network-access-overview). <br /> To power the **Ask dbt** chatbot, the dbt Semantic Layer accesses the Cortex LLM to execute queries and generate text based on the prompt. This is configured when the user sets up the Semantic Layer environment. |
+| dbt product Snowflake account | The location of where the Native App application package is hosted and then distributed into the consumer account. <br /> The consumer's event table is shared to this account for application monitoring and logging. |
+| Consumer’s dbt Cloud account | The Native App interacts with the dbt Cloud APIs for metadata and processing Semantic Layer queries to power the Native App experiences. <br /> The dbt Cloud account also calls the consumer Snowflake account to utilize the warehouse to execute dbt queries for orchestration and the Cortex LLM Arctic to power the **Ask dbt** chatbot. |
+
+The following diagram provides an illustration of the architecture:
+
+
+
+
+## Access
+You can log in to the dbt Snowflake Native App using your regular Snowflake login authentication method. During this [Preview](/docs/dbt-versions/product-lifecycles#dbt-cloud), you do not need dbt Cloud credentials (a dbt Cloud seat) to access the application but this is subject to change.
+
+App users are able to access all information that's available to the API service token.
+
+## Procurement
+The dbt Snowflake Native App is available on the [Snowflake Marketplace](https://app.snowflake.com/marketplace/listing/GZTYZSRT2R3). With the purchase of the listing, users will have access to the Native App and a dbt Cloud account that's on the Enterprise plan.
+
+If you're interested, please [contact us](mailto:sales_snowflake_marketplace@dbtlabs.com) for more information.
+
+## Support
+If you have any questions about the dbt Snowflake Native App, you may [contact our Support team](mailto:dbt-snowflake-marketplace@dbtlabs.com) for help. Please provide information about your installation of the Native App, including your dbt Cloud account ID and Snowflake account identifier.
diff --git a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md b/website/docs/docs/cloud-integrations/avail-sl-integrations.md
similarity index 96%
rename from website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
rename to website/docs/docs/cloud-integrations/avail-sl-integrations.md
index 89a88d41eae..30ca750a5c6 100644
--- a/website/docs/docs/use-dbt-semantic-layer/avail-sl-integrations.md
+++ b/website/docs/docs/cloud-integrations/avail-sl-integrations.md
@@ -39,5 +39,5 @@ import AvailIntegrations from '/snippets/_sl-partner-links.md';
- [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
- [Hex dbt Semantic Layer cells](https://learn.hex.tech/docs/logic-cell-types/transform-cells/dbt-metrics-cells) to set up SQL cells in Hex.
- [Resolve 'Failed APN'](/faqs/Troubleshooting/sl-alpn-error) error when connecting to the dbt Semantic Layer.
-- [dbt Semantic Layer on-demand courses](https://courses.getdbt.com/courses/semantic-layer)
+- [dbt Semantic Layer on-demand course](https://learn.getdbt.com/courses/semantic-layer)
- [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)
diff --git a/website/docs/docs/cloud-integrations/overview.md b/website/docs/docs/cloud-integrations/overview.md
new file mode 100644
index 00000000000..e0adfe72303
--- /dev/null
+++ b/website/docs/docs/cloud-integrations/overview.md
@@ -0,0 +1,27 @@
+---
+title: "About dbt Cloud integrations"
+sidebar_label: "About dbt Cloud integrations"
+pagination_prev: null
+pagination_next: "docs/cloud-integrations/snowflake-native-app"
+---
+
+Many data applications integrate with dbt Cloud, enabling you to leverage the power of dbt for a variety of use cases and workflows.
+
+
+## Integrations with dbt
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/website/docs/docs/use-dbt-semantic-layer/gsheets.md b/website/docs/docs/cloud-integrations/semantic-layer/gsheets.md
similarity index 100%
rename from website/docs/docs/use-dbt-semantic-layer/gsheets.md
rename to website/docs/docs/cloud-integrations/semantic-layer/gsheets.md
diff --git a/website/docs/docs/use-dbt-semantic-layer/tableau.md b/website/docs/docs/cloud-integrations/semantic-layer/tableau.md
similarity index 100%
rename from website/docs/docs/use-dbt-semantic-layer/tableau.md
rename to website/docs/docs/cloud-integrations/semantic-layer/tableau.md
diff --git a/website/docs/docs/cloud-integrations/set-up-snowflake-native-app.md b/website/docs/docs/cloud-integrations/set-up-snowflake-native-app.md
new file mode 100644
index 00000000000..7e9d7c8dc16
--- /dev/null
+++ b/website/docs/docs/cloud-integrations/set-up-snowflake-native-app.md
@@ -0,0 +1,152 @@
+---
+title: "Set up the dbt Snowflake Native App"
+description: "Learn how to set up the dbt Snowflake Native App"
+pagination_prev: "docs/cloud-integrations/snowflake-native-app"
+pagination_next: null
+---
+
+# Set up the dbt Snowflake Native App
+
+The [dbt Snowflake Native App](/docs/cloud-integrations/snowflake-native-app) enables these features within the Snowflake user interface: dbt Explorer, the **Ask dbt** chatbot, and dbt Cloud's orchestration observability features.
+
+Configure both dbt Cloud and Snowflake to set up this integration. The high-level steps are described as follows:
+
+1. Set up the **Ask dbt** configuration.
+1. Configure Snowflake.
+1. Configure dbt Cloud.
+1. Purchase and install the dbt Snowflake Native App.
+1. Configure the app.
+1. Verify successful installation of the app.
+1. Onboard new users to the app.
+
+The order of the steps is slightly different if you purchased the public listing of the Native App; you'll start by purchasing the Native App, satisfying the prerequisites, and then completing the remaining steps in order.
+
+## Prerequisites
+The following are the prerequisites for dbt Cloud and Snowflake.
+
+### dbt Cloud
+
+- You must have a dbt Cloud account on the Enterprise plan that's in an AWS Region or Azure region. If you don't already have one, please [contact us](mailto:sales_snowflake_marketplace@dbtlabs.com) to get started.
+ - Currently, the dbt Semantic Layer is unavailable for Azure single-tenant (ST) instances, and the **Ask dbt** chatbot will not function in the dbt Snowflake Native App without it.
+- Your dbt Cloud account must have permission to create a [service token](/docs/dbt-cloud-apis/service-tokens). For details, refer to [Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions).
+- There's a dbt Cloud project with [Semantic Layer configured](/docs/use-dbt-semantic-layer/setup-sl) and metrics declared.
+- You have set up a [production deployment environment](/docs/deploy/deploy-environments#set-as-production-environment).
+ - There has been at least one successful job run that includes a `docs generate` step in the deployment environment.
+
+### Snowflake
+
+- You have **ACCOUNTADMIN** access in Snowflake.
+- Your Snowflake account must have access to the Native App/SPCS integration (PrPr until Summit) and NA/SPCS configurations (PuPr at end of June). If you're unsure, please check with your Snowflake account manager.
+- The Snowflake account must be in an AWS Region or Azure region.
+
+## Set up the configuration for Ask dbt
+
+Configure dbt Cloud and Snowflake Cortex to power the **Ask dbt** chatbot.
+
+1. In dbt Cloud, browse to your Semantic Layer configurations.
+ 1. From the gear menu, select **Account settings**. In the left sidebar, select **Projects** and choose your dbt project from the project list.
+ 1. In the **Project details** panel, click the **Edit Semantic Layer Configuration** link (which is below the **GraphQL URL** option).
+1. In the **Semantic Layer Configuration Details** panel, identify the Snowflake credentials (which you'll use to access Snowflake Cortex) and the environment against which the Semantic Layer is run. Save the username, role, and the environment in a temporary location to use later on.
+
+
+
+1. Identify the default database the environment is connecting to.
+ 1. Select **Deploy > Environments** from the top navigation bar. From the environments list, select the one that was identified in the **Semantic Layer Configuration Details** panel.
+ 1. On the environment's page, click **Settings**. Scroll to the section **Deployment connection**. The listed database is the default for your environment and is also where you will create the schema. Save this information in a temporary location to use later on.
+
+1. In Snowflake, verify that your SL user has been granted permission to use Snowflake Cortex. This user must have the ability to read and write into this schema to create the Retrieval Augmented Generation (RAG). For more information, refer to [Required Privileges](https://docs.snowflake.com/en/user-guide/snowflake-cortex/llm-functions#required-privileges) in the Snowflake docs.
+
+ By default, all users should have access to Snowflake Cortex. If this is disabled for you, open a Snowflake SQL worksheet and run these statements:
+
+ ```sql
+ create role cortex_user_role;
+ grant database role SNOWFLAKE.CORTEX_USER to role cortex_user_role;
+ grant role cortex_user_role to user SL_USER;
+ ```
+
+ Make sure to replace `SNOWFLAKE.CORTEX_USER` and `SL_USER` with the appropriate strings for your environment.
+
+1. Create a schema `dbt_sl_llm` in the deployment database. Open a Snowflake SQL worksheet and run these statements:
+
+ ```sql
+ create schema YOUR_DEPLOYMENT_DATABASE.dbt_sl_llm;
+ grant ownership on schema dbt_sl_llm to role SL_ROLE;
+ ```
+
+ Make sure to replace `YOUR_DEPLOYMENT_DATABASE` and `SL_ROLE` with the appropriate strings for your environment.
+
+## Configure dbt Cloud
+Collect three pieces of information from dbt Cloud to set up the application.
+
+1. From the gear menu in dbt Cloud, select **Account settings**. In the left sidebar, select **API tokens > Service tokens**. Create a service token with access to all the projects you want to access in the dbt Snowflake Native App. Grant these permission sets:
+ - **Job Admin**
+ - **Metadata Only**
+ - **Semantic Layer Only**
+
+ Make sure to save the token information in a temporary location to use later during Native App configuration.
+
+ The following is an example of granting the permission sets to all projects:
+
+
+
+1. From the left sidebar, select **Account** and save this information in a temporary location to use later during Native App configuration:
+ - **Account ID** — A numerical string representing your dbt Cloud account.
+ - **Access URL** — If you have a North America multi-tenant account, use `cloud.getdbt.com` as the access URL. For all other regions, refer to [Access, Regions, & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses) and look up the access URL you should use in the table.
+
+## Install the dbt Snowflake Native App
+1. Browse to the listing for the dbt Snowflake Native App:
+ - **Private listing** (recommended) — Use the link from the email sent to you.
+ - **Public listing** — Navigate to the [Snowflake Marketplace](https://app.snowflake.com/marketplace/listing/GZTYZSRT2R3).
+1. Click **Get** on the listing to install the dbt Snowflake Native App. This can take several minutes. When installation is complete, an email is sent to you.
+
+ A message will appear asking if you want to change the application and grant access to the warehouse for installation. dbt Labs strongly recommends not changing the application name unless necessary.
+1. When the dbt Snowflake Native App is successfully installed, click **Configure** in the modal window.
+
+## Configure the dbt Snowflake Native App
+
+1. On the **Activate dbt** page, click **Grant** in **Step 1: Grant Account Privileges**.
+1. When privileges have been successfully granted, click **Review** in **Step 2: Allow Connections**.
+
+ Walk through the **Connect to dbt Cloud External Access Integration** steps. You will need your dbt Cloud account information that you collected earlier. Enter your account ID, access URL, and API service token as the **Secret value** when prompted.
+1. On the **Activate dbt** page, click **Activate** when you've established a successful connection to the dbt Cloud External Access Integration. It can take a few minutes to spin up the required Snowflake services and compute resources.
+1. When activation is complete, select the **Telemetry** tab and enable the option to share your `INFO` logs. The option might take some time to display. This is because Snowflake needs to create the events table so it can be shared.
+1. When the option is successfully enabled, click **Launch app**. Then, log in to the app with your Snowflake credentials.
+
+ If it redirects you to a Snowsight worksheet (instead of the login page), that means the app hasn't finished installing. You can resolve this issue, typically, by refreshing the page.
+
+ The following is an example of the dbt Snowflake Native App after configuration:
+
+
+
+## Verify the app installed successfully
+
+To verify the app installed successfully, select any of the following from the sidebar:
+
+- **Explore** — Launch dbt Explorer and make sure you can access your dbt project information.
+- **Jobs** — Review the run history of the dbt jobs.
+- **Ask dbt** — Click on any of the suggested prompts to ask the chatbot a question. Depending on the number of metrics defined for the dbt project, it can take several minutes to load **Ask dbt** the first time because dbt is building the RAG. Subsequent launches will load faster.
+
+
+The following is an example of the **Ask dbt** chatbot with the suggested prompts near the top:
+
+
+
+
+## Onboard new users
+1. From the sidebar in Snowflake, select **Data Products > Apps**. Choose **dbt** from the list to open the app's configuration page. Then, click **Manage access** (in the upper right) to onboard new users to the application. Grant the **APP_USER** role to the appropriate roles that should have access to the application but not the ability to edit the configurations. Grant **APP_ADMIN** to roles that should have access to edit or remove the configurations.
+
+1. New users can access the app with either the Snowflake app URL that's been shared with them, or by clicking **Launch app** from the app's configuration page.
+
+
+## FAQs
+
+
+
+Check that the SL user has been granted access to the `dbt_sl_llm` schema and make sure they have all the necessary permissions to read and write from the schema.
+
+
+
+
+
+If there's been an update to the dbt Cloud account ID, access URL, or API service token, you need to update the configuration for the dbt Snowflake Native App. In Snowflake, navigate to the app's configuration page and delete the existing configurations. Add the new configuration and then run `CALL app_public.restart_app();` in the application database in Snowsight.
+
diff --git a/website/docs/docs/cloud/about-cloud-setup.md b/website/docs/docs/cloud/about-cloud-setup.md
index 94bb523aefa..48e0344bc5f 100644
--- a/website/docs/docs/cloud/about-cloud-setup.md
+++ b/website/docs/docs/cloud/about-cloud-setup.md
@@ -18,7 +18,7 @@ This portion of our documentation will take you through the various settings in
For steps on installing dbt Cloud development tools, refer to the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) or the [dbt Cloud IDE (browser-based)](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud).
-These settings are intended for dbt Cloud administrators. If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](/guides). If you want a more in-depth learning experience, we recommend taking the dbt Fundamentals on our [dbt Learn online courses site](https://courses.getdbt.com/).
+These settings are intended for dbt Cloud administrators. If you need a more detailed first-time setup guide for specific data platforms, read our [quickstart guides](/guides). If you want a more in-depth learning experience, we recommend taking the dbt Fundamentals course on our [dbt Learn site](https://learn.getdbt.com/).
## Prerequisites
diff --git a/website/docs/docs/core/connect-data-platform/bigquery-setup.md b/website/docs/docs/core/connect-data-platform/bigquery-setup.md
index 8238bc043c4..eedc3646f89 100644
--- a/website/docs/docs/core/connect-data-platform/bigquery-setup.md
+++ b/website/docs/docs/core/connect-data-platform/bigquery-setup.md
@@ -388,8 +388,6 @@ my-profile:
execution_project: buck-stops-here-456
```
-
-
### Running Python models on Dataproc
To run dbt Python models on GCP, dbt uses companion services, Dataproc and Cloud Storage, that offer tight integrations with BigQuery. You may use an existing Dataproc cluster and Cloud Storage bucket, or create new ones:
@@ -447,7 +445,6 @@ my-profile:
For a full list of possible configuration fields that can be passed in `dataproc_batch`, refer to the [Dataproc Serverless Batch](https://cloud.google.com/dataproc-serverless/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.Batch) documentation.
-
## Required permissions
diff --git a/website/docs/docs/core/connect-data-platform/connection-profiles.md b/website/docs/docs/core/connect-data-platform/connection-profiles.md
index 7cca5ae751b..7feec032c21 100644
--- a/website/docs/docs/core/connect-data-platform/connection-profiles.md
+++ b/website/docs/docs/core/connect-data-platform/connection-profiles.md
@@ -19,12 +19,8 @@ profile: 'jaffle_shop'
dbt then checks your [`profiles.yml` file](/docs/core/connect-data-platform/profiles.yml) for a profile with the same name. A profile contains all the details required to connect to your data warehouse.
-
-
dbt will search the current working directory for the `profiles.yml` file and will default to the `~/.dbt/` directory if not found.
-
-
This file generally lives outside of your dbt project to avoid sensitive credentials being checked in to version control, but `profiles.yml` can be safely checked in when [using environment variables](#advanced-using-environment-variables) to load sensitive credentials.
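As a rough sketch (the profile name, adapter, and variable names are illustrative), sensitive values can be read from the environment with `env_var`:

```yaml
jaffle_shop:                       # must match the `profile` set in dbt_project.yml
  target: dev
  outputs:
    dev:
      type: snowflake              # illustrative adapter
      account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
      user: "{{ env_var('SNOWFLAKE_USER') }}"
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"   # secret stays out of version control
      database: analytics
      warehouse: transforming
      schema: dbt_dev
      threads: 4
```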
@@ -135,15 +131,11 @@ For more information, check out [using threads](/docs/running-a-dbt-project/usin
The parent directory for `profiles.yml` is determined using the following precedence:
-
-
1. `--profiles-dir` option
1. `DBT_PROFILES_DIR` environment variable
1. current working directory
1. `~/.dbt/` directory
-
-
To check the expected location of your `profiles.yml` file for your installation of dbt, you can run the following:
```bash
diff --git a/website/docs/docs/core/connect-data-platform/snowflake-setup.md b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
index 510f2f27791..24ad6f88d62 100644
--- a/website/docs/docs/core/connect-data-platform/snowflake-setup.md
+++ b/website/docs/docs/core/connect-data-platform/snowflake-setup.md
@@ -56,7 +56,7 @@ my-snowflake-db:
connect_timeout: 10 # default: 10
retry_on_database_errors: False # default: false
retry_all: False # default: false
- reuse_connections: False # default: false (available v1.4+)
+ reuse_connections: False # default: false
```
@@ -91,7 +91,7 @@ my-snowflake-db:
connect_timeout: 10 # default: 10
retry_on_database_errors: False # default: false
retry_all: False # default: false
- reuse_connections: False # default: false (available v1.4+)
+ reuse_connections: False # default: false
```
Along with adding the `authenticator` parameter, be sure to run `alter account set allow_client_mfa_caching = true;` in your Snowflake warehouse. Together, these will allow you to easily verify authentication with the DUO Mobile app (skipping this results in push notifications for every model built on every `dbt run`).
@@ -263,14 +263,10 @@ The `client_session_keep_alive` feature is intended to keep Snowflake sessions a
[Query tags](https://docs.snowflake.com/en/sql-reference/parameters.html#query-tag) are a Snowflake
parameter that can be quite useful later on when searching in the [QUERY_HISTORY view](https://docs.snowflake.com/en/sql-reference/account-usage/query_history.html).
-
-
### reuse_connections
During node execution (such as model and test), dbt opens connections against a Snowflake warehouse. Setting this configuration to `True` reduces execution time by verifying credentials only once for each thread.
-
-
### retry_on_database_errors
The `retry_on_database_errors` flag along with the `connect_retries` count specification is intended to make retries configurable after the snowflake connector encounters errors of type snowflake.connector.errors.DatabaseError. These retries can be helpful for handling errors of type "JWT token is invalid" when using key pair authentication.
diff --git a/website/docs/docs/dbt-versions/2023-release-notes.md b/website/docs/docs/dbt-versions/2023-release-notes.md
index 3ffbebbd161..dfc4758a6d6 100644
--- a/website/docs/docs/dbt-versions/2023-release-notes.md
+++ b/website/docs/docs/dbt-versions/2023-release-notes.md
@@ -201,7 +201,7 @@ Archived release notes for dbt Cloud from 2023
- Continuous integration jobs are now generally available and no longer in beta!
- Added [Postgres PrivateLink set up page](/docs/cloud/secure/postgres-privatelink)
- Published beta docs for [dbt Explorer](/docs/collaborate/explore-projects).
- - Added a new Semantic Layer [GraphQL API doc](/docs/dbt-cloud-apis/sl-graphql) and updated the [integration docs](/docs/use-dbt-semantic-layer/avail-sl-integrations) to include Hex. Responded to dbt community feedback and clarified Metricflow use cases for dbt Core and dbt Cloud.
+ - Added a new Semantic Layer [GraphQL API doc](/docs/dbt-cloud-apis/sl-graphql) and updated the [integration docs](/docs/cloud-integrations/avail-sl-integrations) to include Hex. Responded to dbt community feedback and clarified Metricflow use cases for dbt Core and dbt Cloud.
- Added an [FAQ](/faqs/Git/git-migration) describing how to migrate from one git provider to another in dbt Cloud.
- Clarified an example and added a [troubleshooting section](/docs/cloud/connect-data-platform/connect-snowflake#troubleshooting) to Snowflake connection docs to address common errors and provide solutions.
@@ -229,7 +229,7 @@ Archived release notes for dbt Cloud from 2023
It aims to bring the best of modeling and semantics to downstream applications by introducing:
- - Brand new [integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations) such as Tableau, Google Sheets, Hex, Mode, and Lightdash.
+ - Brand new [integrations](/docs/cloud-integrations/avail-sl-integrations) such as Tableau, Google Sheets, Hex, Mode, and Lightdash.
- New [Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) using GraphQL and JDBC to query metrics and build integrations.
- dbt Cloud [multi-tenant regional](/docs/cloud/about-cloud/access-regions-ip-addresses) support for North America, EMEA, and APAC. Single-tenant support coming soon.
- Coming soon — Schedule exports (a way to build tables in your data platform) as part of your dbt Cloud job. Use the APIs to call an export, then access them in your preferred BI tool.
diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md
index 2119559c8ad..a0e42c6e068 100644
--- a/website/docs/docs/dbt-versions/release-notes.md
+++ b/website/docs/docs/dbt-versions/release-notes.md
@@ -18,6 +18,10 @@ Release notes are grouped by month for both multi-tenant and virtual private clo
[^*] The official release date for this new format of release notes is May 15th, 2024. Historical release notes for prior dates may not reflect all available features released earlier this year or their tenancy availability.
+## June 2024
+
+- **New:** A [preview](/docs/dbt-versions/product-lifecycles#dbt-cloud) of the dbt Snowflake Native App is now available. With this app, you can access dbt Explorer, the **Ask dbt** chatbot, and orchestration observability features, extending your dbt Cloud experience into the Snowflake UI. To learn more, check out [About the dbt Snowflake Native App](/docs/cloud-integrations/snowflake-native-app) and [Set up the dbt Snowflake Native App](/docs/cloud-integrations/set-up-snowflake-native-app).
+
## May 2024
- **Enhancement:** We've now introduced a new **Prune branches** [Git button](/docs/cloud/dbt-cloud-ide/ide-user-interface#prune-branches-modal) in the dbt Cloud IDE. This button allows you to delete local branches that have been deleted from the remote repository, keeping your branch management tidy. Available in all regions now and will be released to single tenant accounts during the next release cycle.
@@ -77,7 +81,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
- **New:** dbt Mesh is now GA to dbt Cloud Enterprise users. dbt Mesh is a framework that helps organizations scale their teams and data assets effectively. It promotes governance best practices and breaks large projects into manageable sections. Get started with dbt Mesh by reading the [dbt Mesh quickstart guide](https://docs.getdbt.com/guides/mesh-qs?step=1).
-- **New:** The dbt Semantic Layer [Tableau Desktop, Tableau Server](/docs/use-dbt-semantic-layer/tableau), and [Google Sheets integration](/docs/use-dbt-semantic-layer/gsheets) is now GA to dbt Cloud Team or Enterprise accounts. These first-class integrations allow you to query and unlock valuable insights from your data ecosystem.
+- **New:** The dbt Semantic Layer [Tableau Desktop, Tableau Server](/docs/cloud-integrations/semantic-layer/tableau), and [Google Sheets integration](/docs/cloud-integrations/semantic-layer/gsheets) is now GA to dbt Cloud Team or Enterprise accounts. These first-class integrations allow you to query and unlock valuable insights from your data ecosystem.
- **Enhancement:** As part of our ongoing commitment to improving the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud#considerations), the filesystem now comes with improvements to speed up dbt development, such as introducing a Git repository limit of 10GB.
@@ -114,16 +118,16 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
- **Behavior change:** Introduced the `require_explicit_package_overrides_for_builtin_materializations` flag, opt-in and disabled by default. If set to `True`, dbt will only use built-in materializations defined in the root project or within dbt, rather than implementations in packages. This will become the default in May 2024 (dbt Core v1.8 and dbt Cloud "Keep on latest version"). Read [Package override for built-in materialization](/reference/global-configs/legacy-behaviors#package-override-for-built-in-materialization) for more information.
**dbt Semantic Layer**
-- **New**: Use Saved selections to [save your query selections](/docs/use-dbt-semantic-layer/gsheets#using-saved-selections) within the [Google Sheets application](/docs/use-dbt-semantic-layer/gsheets). They can be made private or public and refresh upon loading.
+- **New**: Use Saved selections to [save your query selections](/docs/cloud-integrations/semantic-layer/gsheets#using-saved-selections) within the [Google Sheets application](/docs/cloud-integrations/semantic-layer/gsheets). They can be made private or public and refresh upon loading.
- **New**: Metrics are now displayed by their labels as `metric_name`.
- **Enhancement**: [Metrics](/docs/build/metrics-overview) now supports the [`meta` option](/reference/resource-configs/meta) under the [config](/reference/resource-properties/config) property. Previously, we only supported the now deprecated `meta` tag.
-- **Enhancement**: In the Google Sheets application, we added [support](/docs/use-dbt-semantic-layer/gsheets#using-saved-queries) to allow jumping off from or exploring MetricFlow-defined saved queries directly.
+- **Enhancement**: In the Google Sheets application, we added [support](/docs/cloud-integrations/semantic-layer/gsheets#using-saved-queries) to allow jumping off from or exploring MetricFlow-defined saved queries directly.
- **Enhancement**: In the Google Sheets application, we added support to query dimensions without metrics. Previously, you needed a dimension.
- **Enhancement**: In the Google Sheets application, we added support for time presets and complex time range filters such as "between", "after", and "before".
- **Enhancement**: In the Google Sheets application, we added support to automatically populate dimension values when you select a "where" filter, removing the need to manually type them.
- **Enhancement**: In the Google Sheets application, we added support to directly query entities, expanding the flexibility of data requests.
- **Enhancement**: In the Google Sheets application, we added an option to exclude column headers, which is useful for populating templates with only the required data.
-- **Deprecation**: For the Tableau integration, the [`METRICS_AND_DIMENSIONS` data source](/docs/use-dbt-semantic-layer/tableau#using-the-integration) has been deprecated for all accounts not actively using it. We encourage users to transition to the "ALL" data source for future integrations.
+- **Deprecation**: For the Tableau integration, the [`METRICS_AND_DIMENSIONS` data source](/docs/cloud-integrations/semantic-layer/tableau#using-the-integration) has been deprecated for all accounts not actively using it. We encourage users to transition to the "ALL" data source for future integrations.
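As a hedged illustration of the `meta` enhancement mentioned earlier in this list, a metric definition might carry `meta` under `config` roughly as follows; the metric name, measure, and meta keys are invented for the example.

```yaml
# models/marts/metrics.yml (hypothetical metric; names and meta keys are illustrative)
metrics:
  - name: total_revenue
    label: Total revenue
    type: simple
    type_params:
      measure: revenue_amount   # assumes a measure of this name exists in a semantic model
    config:
      meta:                     # `meta` now sits under `config` instead of the deprecated top-level tag
        owner: finance_team
        tier: gold
```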
## March 2024
@@ -140,7 +144,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
- **New:** [Exports](/docs/use-dbt-semantic-layer/exports#define-exports) allow you to materialize a saved query as a table or view in your data platform. By using exports, you can unify metric definitions in your data platform and query them as you would any other table or view. See the example sketch after this list.
- **New:** You can access a list of your [exports](/docs/use-dbt-semantic-layer/exports) with the new list saved-queries command by adding `--show-exports`
-- **New:** The dbt Semantic Layer and [Tableau Connector](/docs/use-dbt-semantic-layer/tableau) now supports relative date filters in Tableau.
+- **New:** The dbt Semantic Layer and [Tableau Connector](/docs/cloud-integrations/semantic-layer/tableau) now support relative date filters in Tableau.
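As referenced in the exports bullet above, a saved query with an export might be configured along these lines; the query, metric, and schema names are assumptions rather than a definitive spec, so verify the details against the exports documentation.

```yaml
# models/metrics/saved_queries.yml (hypothetical saved query with an export)
saved_queries:
  - name: monthly_revenue
    query_params:
      metrics:
        - total_revenue
      group_by:
        - TimeDimension('metric_time', 'month')
    exports:
      - name: monthly_revenue_table
        config:
          export_as: table          # materialize the saved query as a table in the data platform
          schema: analytics_marts   # assumed target schema
```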
-
@@ -213,7 +217,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
-- **Enhancement:** The dbt Semantic Layer [Google Sheets integration](/docs/use-dbt-semantic-layer/gsheets) now exposes a note on the cell where the data was requested, indicating clearer data requests. The integration also now exposes a new **Time Range** option, which allows you to quickly select date ranges.
+- **Enhancement:** The dbt Semantic Layer [Google Sheets integration](/docs/cloud-integrations/semantic-layer/gsheets) now exposes a note on the cell where the data was requested, indicating clearer data requests. The integration also now exposes a new **Time Range** option, which allows you to quickly select date ranges.
- **Enhancement:** The [GraphQL API](/docs/dbt-cloud-apis/sl-graphql) includes a `requiresMetricTime` parameter to better handle metrics that must be grouped by time. (Certain metrics defined in MetricFlow can't be queried without a time dimension.)
- **Enhancement:** Enable querying metrics with offset and cumulative metrics with the time dimension name, instead of `metric_time`. [Issue #1000](https://github.com/dbt-labs/metricflow/issues/1000)
- Enable querying `metric_time` without metrics. [Issue #928](https://github.com/dbt-labs/metricflow/issues/928)
@@ -267,7 +271,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
- **New:** New metric type that allows you to measure conversion events. For example, users who viewed a web page and then filled out a form. For more details, refer to [Conversion metrics](/docs/build/conversion) and the example sketch after this list.
- **New:** Instead of specifying the fully qualified dimension name (for example, `order__user__country`) in the group by or filter expression, you now only need to provide the primary entity and dimension name, like `user__country`.
-- **New:** You can now query the [saved queries](/docs/build/saved-queries) you've defined in the dbt Semantic Layer using [Tableau](/docs/use-dbt-semantic-layer/tableau), [GraphQL API](/docs/dbt-cloud-apis/sl-graphql), [JDBC API](docs/dbt-cloud-apis/sl-jdbc), and the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation).
+- **New:** You can now query the [saved queries](/docs/build/saved-queries) you've defined in the dbt Semantic Layer using [Tableau](/docs/cloud-integrations/semantic-layer/tableau), [GraphQL API](/docs/dbt-cloud-apis/sl-graphql), [JDBC API](/docs/dbt-cloud-apis/sl-jdbc), and the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation).
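Here's a hedged sketch of the conversion metric type noted at the top of this list; the entity, measure names, and window are invented for illustration, so treat the Conversion metrics docs as the source of truth.

```yaml
# models/metrics/conversion_metrics.yml (hypothetical conversion metric)
metrics:
  - name: visit_to_purchase_rate
    label: Visit to purchase conversion rate
    type: conversion
    type_params:
      conversion_type_params:
        entity: user                  # assumed entity shared by both measures
        calculation: conversion_rate
        base_measure:
          name: page_views            # assumed measure for the initial event
        conversion_measure:
          name: purchases             # assumed measure for the conversion event
        window: 7 days                # only count conversions within 7 days of the base event
```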
-
@@ -281,7 +285,7 @@ The following features are new or enhanced as part of our [dbt Cloud Launch Show
- **Enhancement:** The YAML spec parameter `label` is now available for Semantic Layer metrics in [JDBC and GraphQL APIs](/docs/dbt-cloud-apis/sl-api-overview). This means you can conveniently use `label` as a display name for your metrics when exposing them.
- **Enhancement:** Added support for `create_metric: true` for a measure, which is a shorthand to quickly create metrics. This is useful in cases when metrics are only used to build other metrics. See the example sketch after this list.
-- **Enhancement:** Added support for Tableau parameter filters. You can use the [Tableau connector](docs/use-dbt-semantic-layer/tableau) to create and use parameters with your dbt Semantic Layer data.
+- **Enhancement:** Added support for Tableau parameter filters. You can use the [Tableau connector](/docs/cloud-integrations/semantic-layer/tableau) to create and use parameters with your dbt Semantic Layer data.
- **Enhancement:** Added support to expose `expr` and `agg` for [Measures](/docs/build/measures) in the [GraphQL API](/docs/dbt-cloud-apis/sl-graphql).
- **Enhancement:** Improved error messages in the command line interface when querying a dimension that isn't reachable for a given metric.
- **Enhancement:** You can now query entities using our Tableau integration (similar to querying dimensions).
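For the `create_metric: true` shorthand called out earlier in this list, a measure might be defined roughly as in the sketch below; the semantic model, underlying model, and measure names are assumptions.

```yaml
# models/semantic_models/orders.yml (hypothetical semantic model using the create_metric shorthand)
semantic_models:
  - name: orders
    model: ref('fct_orders')       # assumed underlying dbt model
    defaults:
      agg_time_dimension: ordered_at
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: order_total
        agg: sum
        create_metric: true        # shorthand: also expose this measure directly as a metric
```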
diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md
index e036444c304..754f5e9c960 100644
--- a/website/docs/docs/deploy/webhooks.md
+++ b/website/docs/docs/deploy/webhooks.md
@@ -21,9 +21,9 @@ dbt Cloud retries sending each event five times. dbt Cloud keeps a log of each w
A webhook in dbt Cloud has a timeout of 10 seconds. This means that if the endpoint doesn't respond within 10 seconds, the webhook processor will time out. This can result in a situation where the client responds successfully after the 10-second timeout and records a success status, while the dbt Cloud webhooks system interprets it as a failure.
:::tip Videos
-If you're interested in course learning with videos, check out the [Webhooks on-demand course](https://courses.getdbt.com/courses/webhooks) from dbt Labs.
+If you're interested in course learning with videos, check out the [Webhooks on-demand course](https://learn.getdbt.com/courses/webhooks) from dbt Labs.
-You can also check out the free [dbt Fundamentals course](https://courses.getdbt.com/courses/fundamentals).
+You can also check out the free [dbt Fundamentals course](https://learn.getdbt.com/courses/dbt-fundamentals).
:::
## Prerequisites
diff --git a/website/docs/docs/get-started-dbt.md b/website/docs/docs/get-started-dbt.md
index 5dd5a705397..b040fb2bb24 100644
--- a/website/docs/docs/get-started-dbt.md
+++ b/website/docs/docs/get-started-dbt.md
@@ -82,5 +82,5 @@ Expand your dbt knowledge and expertise with these additional resources:
- [Join the bi-weekly demos](https://www.getdbt.com/resources/webinars/dbt-cloud-demos-with-experts) to see dbt Cloud in action and ask questions.
- [dbt Cloud AWS marketplace](https://aws.amazon.com/marketplace/pp/prodview-tjpcf42nbnhko) contains information on how to deploy dbt Cloud on AWS, user reviews, and more.
- [Best practices](https://docs.getdbt.com/best-practices) contains information on how dbt Labs approaches building projects through our current viewpoints on structure, style, and setup.
-- [dbt Learn](https://courses.getdbt.com/collections) offers free online courses that cover dbt fundamentals, advanced topics, and more.
+- [dbt Learn](https://learn.getdbt.com) offers free online courses that cover dbt fundamentals, advanced topics, and more.
- [Join the dbt Community](https://www.getdbt.com/community/join-the-community) to learn how other data practitioners globally are using dbt, share your own experiences, and get help with your dbt projects.
diff --git a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
index 55ca01323df..f75ae8fd15d 100644
--- a/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
+++ b/website/docs/docs/use-dbt-semantic-layer/dbt-sl.md
@@ -74,7 +74,7 @@ plan="dbt Cloud Team or Enterprise"
@@ -233,7 +233,7 @@ Yes, while [entities](/docs/build/entities) must be defined under “entities,
There are a number of data applications that have integrations with the dbt Semantic Layer, including Tableau, Google Sheets, Hex, and Mode, among others.
-Refer to [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations) for more information.
+Refer to [Available integrations](/docs/cloud-integrations/avail-sl-integrations) for more information.
diff --git a/website/docs/faqs/Git/gitignore.md b/website/docs/faqs/Git/gitignore.md
index 6bda9611733..8d966c40e2c 100644
--- a/website/docs/faqs/Git/gitignore.md
+++ b/website/docs/faqs/Git/gitignore.md
@@ -123,111 +123,4 @@ dbt_modules/
-
-
-1. Launch the Cloud IDE into the project that is being fixed, by selecting **Develop** on the menu bar.
-2. In your **File Explorer**, check to see if a `.gitignore` file exists at the root of your dbt project folder. If it doesn't exist, create a new file.
-3. Open the new or existing `gitignore` file, and add the following:
-
-```bash
-target/
-dbt_packages/
-logs/
-# legacy -- renamed to dbt_packages in dbt v1
-dbt_modules/
-```
-
- * **Note** — You can place these lines anywhere in the file, as long as they're on separate lines. The lines shown are wildcards that will include all nested file and folders. Avoid adding a trailing `'*'` to the lines, such as `target/*`.
-
-For more info on `gitignore` syntax, refer to the [Git docs](https://git-scm.com/docs/gitignore).
-
-4. Save the changes but _don't commit_.
-5. Restart the IDE by clicking on the three dots next to the **IDE Status button** on the lower right corner of the IDE screen and select **Restart IDE**.
-
-
-
-6. Once the IDE restarts, go to the **File Explorer** to delete the following files or folders (if they exist). No data will be lost:
- * `target`, `dbt_modules`, `dbt_packages`, `logs`
-7. **Save** and then **Commit and sync** the changes.
-8. Restart the IDE again using the same procedure as step 5.
-9. Once the IDE restarts, use the 'Create a pull request' (PR) button under the **Version Control** menu to start the process of integrating the changes.
-10. When the git provider's website opens to a page with the new PR, follow the necessary steps to compelete and merge the PR into the main branch of that repository.
-
- * **Note** — The 'main' branch might also be called 'master', 'dev', 'qa', 'prod', or something else depending on the organizational naming conventions. The goal is to merge these changes into the root branch that all other development branches are created from.
-
-11. Return to the dbt Cloud IDE and use the **Change Branch** button to switch to the main branch of the project.
-12. Once the branch has changed, click the **Pull from remote** button to pull in all the changes.
-13. Verify the changes by making sure the files/folders in the `.gitignore `file are in italics.
-
-
-
-
-### Fix in the git provider
-
-Sometimes it's necessary to use the git providers web interface to fix a broken `.gitignore` file. Although the specific steps may vary across providers, the general process remains the same.
-
-There are two options for this approach: editing the main branch directly if allowed, or creating a pull request to implement the changes if required:
-
-
-
-
-
-When permissions allow it, it's possible to edit the `.gitignore` directly on the main branch of your repo. Here are the following steps:
-
-1. Go to your repository's web interface.
-2. Switch to the main branch, and the root directory of your dbt project.
-3. Find the `.gitignore` file. Create a blank one if it doesn't exist.
-4. Edit the file in the web interface, adding the following entries:
-```bash
-target/
-dbt_packages/
-logs/
-# legacy -- renamed to dbt_packages in dbt v1
-dbt_modules/
-```
-5. Commit (save) the file.
-6. Delete the following folders from the dbt project root, if they exist. No data or code will be lost:
- * `target`, `dbt_modules`, `dbt_packages`, `logs`
-7. Commit (save) the deletions to the main branch.
-8. Switch to the dbt Cloud IDE, and open the project that you're fixing.
-9. Reclone your repo in the IDE by clicking on the three dots next to the **IDE Status** button on the lower right corner of the IDE screen, then select **Reclone Repo**.
- * **Note** — Any saved but uncommitted changes will be lost, so make sure you copy any modified code that you want to keep in a temporary location outside of dbt Cloud.
-10. Once you reclone the repo, open the `.gitignore` file in the branch you're working in. If the new changes aren't included, you'll need to merge the latest commits from the main branch into your working branch.
-11. Go to the **File Explorer** to verify the `.gitignore` file contains the correct entries and make sure the untracked files/folders in the .gitignore file are in *italics*.
-12. Great job 🎉! You've configured the `.gitignore` correctly and can continue with your development!
-
-
-
-
-If you can't edit the `.gitignore` directly on the main branch of your repo, follow these steps:
-
-1. Go to your repository's web interface.
-2. Switch to an existing development branch, or create a new branch just for these changes (This is often faster and cleaner).
-3. Find the `.gitignore` file. Create a blank one if it doesn't exist.
-4. Edit the file in the web interface, adding the following entries:
-```bash
-target/
-dbt_packages/
-logs/
-# legacy -- renamed to dbt_packages in dbt v1
-dbt_modules/
-```
-5. Commit (save) the file.
-6. Delete the following folders from the dbt project root, if they exist. No data or code will be lost:
- * `target`, `dbt_modules`, `dbt_packages`, `logs`
-7. Commit (save) the deleted folders.
-8. Open a merge request using the git provider web interface. The merge request should be attempting to merge the changes into the 'main' branch that all development branches are created from.
-9. Follow the necessary procedures to get the branch approved and merged into the 'main' branch. You can delete the branch after the merge is complete.
-10. Once the merge is complete, go back to the dbt Cloud IDE, and open the project that you're fixing.
-11. Reclone your repo in the IDE by clicking on the three dots next to the **IDE Status** button on the lower right corner of the IDE screen, then select **Reclone Repo**.
- * **Note** — Any saved but uncommitted changes will be lost, so make sure you copy any modified code that you want to keep in a temporary location outside of dbt Cloud.
-12. Once you reclone the repo, open the `.gitignore` file in the branch you're working in. If the new changes aren't included, you'll need to merge the latest commits from the main branch into your working branch.
-13. Go to the **File Explorer** to verify the `.gitignore` file contains the correct entries and make sure the untracked files/folders in the .gitignore file are in *italics*.
-14. Great job 🎉! You've configured the `.gitignore` correctly and can continue with your development!
-
-
-
-
-
-
Refer to this [detailed video](https://www.loom.com/share/9b3b8e2b617f41a8bad76ec7e42dd014) for additional guidance.
diff --git a/website/docs/faqs/Project/why-version-2.md b/website/docs/faqs/Project/why-version-2.md
index dc7e409716e..b4e91d6a773 100644
--- a/website/docs/faqs/Project/why-version-2.md
+++ b/website/docs/faqs/Project/why-version-2.md
@@ -14,10 +14,3 @@ Resource yml files do not currently require this config. We only support `versio
-
-
-Once upon a time, the structure of these `.yml` files was very different (s/o to anyone who was using dbt back then!). Adding `version: 2` allowed us to make this structure more extensible.
-
-Currently, Version 2 is the only supported version for these files. We kept `version:` around as a required key so that in the future, if we need to introduce a new structure for these files, we'll be able to do this more easily.
-
-
diff --git a/website/docs/faqs/Troubleshooting/gitignore.md b/website/docs/faqs/Troubleshooting/gitignore.md
index 59fd4e8c866..6ab217ebf07 100644
--- a/website/docs/faqs/Troubleshooting/gitignore.md
+++ b/website/docs/faqs/Troubleshooting/gitignore.md
@@ -38,49 +38,6 @@ dbt_modules/
-
-
-
-
-1. In the dbt Cloud IDE, add the following [.gitignore contents](https://github.com/dbt-labs/dbt-starter-project/blob/main/.gitignore) in your dbt project `.gitignore` file:
-```bash
-target/
-dbt_packages/
-logs/
-# legacy -- renamed to dbt_packages in dbt v1
-dbt_modules/
-```
-2. Go to your `dbt_project.yml` file and add `tmp/` after your `target-path:` and add `log-path: "tmp/logs"`.
- * So it should look like: `target-path: "tmp/target"` and `log-path: "tmp/logs"`:
-
-
-
-3. Save your changes but _don't commit_.
-4. Restart the IDE by clicking on the three dots next to the **IDE Status button** on the lower right of the IDE.
-
-
-
-5. Select **Restart IDE**.
-6. Go back to your dbt project and delete the following four folders (if you have them):
- * `target`
- * `dbt_modules`
- * `dbt_packages`
- * `logs`
-7. **Save** and then **Commit and sync** your changes.
-8. Go back to your `dbt_project.yml` file and undo the modifications you made in **Step 2**.
-
- * Remove `tmp` from your `target-path` and completely remove the `log-path: "tmp/logs"` line.
-
-
-
-9. Restart the IDE again.
-10. Delete the `tmp` folder in the **File Explorer**.
-11. Create a pull request (PR) under the **Version Control** menu to integrate your new changes.
-12. Merge the PR in your git provider page.
-13. Switch to your main branch and click on **Pull from remote** to pull in all the changes you made to your main branch. You can verify the changes by making sure the files/folders in the .gitignore file are in italics.
-
-
-
Refer to this [detailed video](https://www.loom.com/share/9b3b8e2b617f41a8bad76ec7e42dd014) for additional guidance.
diff --git a/website/docs/faqs/Troubleshooting/sl-alpn-error.md b/website/docs/faqs/Troubleshooting/sl-alpn-error.md
index f588d690fac..739f0572dec 100644
--- a/website/docs/faqs/Troubleshooting/sl-alpn-error.md
+++ b/website/docs/faqs/Troubleshooting/sl-alpn-error.md
@@ -4,7 +4,7 @@ description: "To resolve the 'Failed ALPN' error in the dbt Semantic Layer, crea
sidebar_label: 'Use SSL exception to resolve `Failed ALPN` error'
---
-If you're receiving a `Failed ALPN` error when trying to connect the dbt Semantic Layer with the various [data integration tools](/docs/use-dbt-semantic-layer/avail-sl-integrations) (such as Tableau, DBeaver, Datagrip, ADBC, or JDBC), it typically happens when connecting from a computer behind a corporate VPN or Proxy (like Zscaler or Check Point).
+If you're receiving a `Failed ALPN` error when trying to connect the dbt Semantic Layer with the various [data integration tools](/docs/cloud-integrations/avail-sl-integrations) (such as Tableau, DBeaver, DataGrip, ADBC, or JDBC), it typically happens when connecting from a computer behind a corporate VPN or proxy (like Zscaler or Check Point).
The root cause is typically the proxy interfering with the TLS handshake as the dbt Semantic Layer uses gRPC/HTTP2 for connectivity. To resolve this:
diff --git a/website/docs/guides/adapter-creation.md b/website/docs/guides/adapter-creation.md
index 72b20d641cc..20d7bae26e2 100644
--- a/website/docs/guides/adapter-creation.md
+++ b/website/docs/guides/adapter-creation.md
@@ -118,7 +118,7 @@ The more you can answer Yes to the below questions, the easier your adapter deve
### Training
-- the developer (and any product managers) ideally will have substantial experience as an end-user of dbt. If not, it is highly advised that you at least take the [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) and [Advanced Materializations](https://courses.getdbt.com/courses/advanced-materializations) course.
+- The developer (and any product managers) ideally will have substantial experience as an end-user of dbt. If not, it is highly advised that you at least take the [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) and [Advanced Materializations](https://learn.getdbt.com/courses/advanced-materializations) courses.
### Database
diff --git a/website/docs/guides/azure-synapse-analytics-qs.md b/website/docs/guides/azure-synapse-analytics-qs.md
index 1b2244f6fab..052127a9bd8 100644
--- a/website/docs/guides/azure-synapse-analytics-qs.md
+++ b/website/docs/guides/azure-synapse-analytics-qs.md
@@ -28,7 +28,7 @@ In this quickstart guide, you'll learn how to use dbt Cloud with [Azure Synapse
- As a Microsoft admin, you’ve enabled service principal authentication. You must add the service principal to the Synapse workspace with either a Member (recommended) or Admin permission set. For details, refer to [Create a service principal using the Azure portal](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal) in the Microsoft docs. dbt Cloud needs these authentication credentials to connect to Azure Synapse Analytics.
### Related content
-- [dbt Courses](https://courses.getdbt.com/collections/beginner)
+- [dbt Learn courses](https://learn.getdbt.com)
- [About continuous integration jobs](/docs/deploy/continuous-integration)
- [Deploy jobs](/docs/deploy/deploy-jobs)
- [Job notifications](/docs/deploy/job-notifications)
diff --git a/website/docs/guides/bigquery-qs.md b/website/docs/guides/bigquery-qs.md
index 7256e63e911..1ba5f7b0021 100644
--- a/website/docs/guides/bigquery-qs.md
+++ b/website/docs/guides/bigquery-qs.md
@@ -24,7 +24,7 @@ In this quickstart guide, you'll learn how to use dbt Cloud with BigQuery. It wi
- Schedule a job to run.
:::tip Videos for you
-You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
+You can check out [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) for free if you're interested in course learning with videos.
:::
### Prerequisites
@@ -35,7 +35,7 @@ You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamen
### Related content
-- Learn more with [dbt Courses](https://courses.getdbt.com/collections)
+- Learn more with [dbt Learn courses](https://learn.getdbt.com)
- [CI jobs](/docs/deploy/continuous-integration)
- [Deploy jobs](/docs/deploy/deploy-jobs)
- [Job notifications](/docs/deploy/job-notifications)
diff --git a/website/docs/guides/codespace-qs.md b/website/docs/guides/codespace-qs.md
index 55cbad14a02..5f874661313 100644
--- a/website/docs/guides/codespace-qs.md
+++ b/website/docs/guides/codespace-qs.md
@@ -27,7 +27,7 @@ dbt Labs provides a [GitHub Codespace](https://docs.github.com/en/codespaces/ove
- [Build your first models](/guides/manual-install?step=3)
- [Test and document your project](/guides/manual-install?step=4)
- [Schedule a job](/guides/manual-install?step=5)
-- Learn more with [dbt Courses](https://courses.getdbt.com/collections)
+- Learn more with [dbt Learn courses](https://learn.getdbt.com)
## Create a codespace
diff --git a/website/docs/guides/core-cloud-2.md b/website/docs/guides/core-cloud-2.md
index 3c7c21663c4..a4683ddb6f8 100644
--- a/website/docs/guides/core-cloud-2.md
+++ b/website/docs/guides/core-cloud-2.md
@@ -154,12 +154,12 @@ Now that you’ve chosen dbt Cloud as your platform, you’ve unlocked the power
- **Audit logs:** Use [audit logs](/docs/cloud/manage-access/audit-log) to review actions performed by people in your organization. Audit logs contain audited user and system events in real time. You can even [export](/docs/cloud/manage-access/audit-log#exporting-logs) *all* the activity (beyond the 90 days you can view in dbt Cloud).
- **dbt Cloud APIs:** Use dbt Cloud's robust [APIs](/docs/dbt-cloud-apis/overview) to create, read, update, and delete (CRUD) projects, jobs, and environments. The [dbt Cloud Administrative API](/docs/dbt-cloud-apis/admin-cloud-api) and [Terraform provider](https://registry.terraform.io/providers/dbt-labs/dbtcloud/latest/docs/resources/job) facilitate programmatic access and configuration storage, while the [Discovery API](/docs/dbt-cloud-apis/discovery-api) offers extensive metadata querying capabilities, such as job data, model configurations, usage, and overall project health.
- **dbt Explorer**: Use [dbt Explorer](/docs/collaborate/explore-projects) to view your project's [resources](/docs/build/projects) (such as models, tests, and metrics) and their [lineage](https://docs.getdbt.com/terms/data-lineage) to gain a better understanding of its latest production state. (Once you have a successful job in a Production environment).
-- **dbt Semantic Layer:** The [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) allows you to define universal metrics on top of your models that can then be queried in your [business intelligence (BI) tool](/docs/use-dbt-semantic-layer/avail-sl-integrations). This means no more inconsistent metrics — there’s now a centralized way to define these metrics and create visibility in every component of the data flow.
+- **dbt Semantic Layer:** The [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) allows you to define universal metrics on top of your models that can then be queried in your [business intelligence (BI) tool](/docs/cloud-integrations/avail-sl-integrations). This means no more inconsistent metrics — there’s now a centralized way to define these metrics and create visibility in every component of the data flow.
- **dbt Mesh:** Use [dbt Mesh](/best-practices/how-we-mesh/mesh-1-intro) to share data models across organizations, enabling data teams to collaborate on shared data models and leverage the work of other teams.
### Additional help
-- **dbt Learn courses**: Access our free [Learn dbt Cloud](https://courses.getdbt.com/collections) video courses for on-demand training.
+- **dbt Learn courses**: Access our free [Learn dbt Cloud](https://learn.getdbt.com) video courses for on-demand training.
- **dbt Community:** Join the [dbt Community](https://community.getdbt.com/) to connect with other dbt users, ask questions, and share best practices.
- **dbt Support team:** Our [dbt Support team](/docs/dbt-support) is always available to help you troubleshoot your dbt Cloud issues. Create a support ticket in dbt Cloud and we’ll be happy to help!
- **Account management:** Enterprise accounts have an account management team available to help troubleshoot issues and provide account management assistance. [Book a demo](https://www.getdbt.com/contact) to learn more.
@@ -189,7 +189,7 @@ For the next steps, you can continue exploring our 3-part-guide series on moving
### Related content
-- [Learn dbt Cloud](https://courses.getdbt.com/collections) courses
+- [Learn dbt Cloud](https://learn.getdbt.com) courses
- Book [expert-led demos](https://www.getdbt.com/resources/dbt-cloud-demos-with-experts) and insights
- Work with the [dbt Labs’ Professional Services](https://www.getdbt.com/dbt-labs/services) team to support your data organization and migration.
- [How dbt Cloud compares with dbt Core](https://www.getdbt.com/product/dbt-core-vs-dbt-cloud) for a detailed comparison of dbt Core and dbt Cloud.
diff --git a/website/docs/guides/core-to-cloud-1.md b/website/docs/guides/core-to-cloud-1.md
index 19ef688fe88..0a7dbf4dac8 100644
--- a/website/docs/guides/core-to-cloud-1.md
+++ b/website/docs/guides/core-to-cloud-1.md
@@ -50,7 +50,7 @@ This guide outlines the steps you need to take to move from dbt Core to dbt Clou
- [What's next?](https://docs.getdbt.com/guides/core-to-cloud-1?step=11): Summarizes key takeaways and introduces what to expect in the following guides.
### Related docs
-- [Learn dbt Cloud](https://courses.getdbt.com/collections) on-demand video learning.
+- [Learn dbt Cloud](https://learn.getdbt.com) on-demand video learning.
- Book [expert-led demos](https://www.getdbt.com/resources/dbt-cloud-demos-with-experts) and insights
- Work with the [dbt Labs’ Professional Services](https://www.getdbt.com/dbt-labs/services) team to support your data organization and migration.
@@ -267,7 +267,7 @@ For the next steps, you can continue exploring our 3-part-guide series on moving
### Related docs
-- [Learn dbt Cloud](https://courses.getdbt.com/collections) video courses for on-demand learning.
+- [Learn dbt Cloud](https://learn.getdbt.com) video courses for on-demand learning.
- Book [expert-led demos](https://www.getdbt.com/resources/dbt-cloud-demos-with-experts) and insights.
- Work with the [dbt Labs’ Professional Services](https://www.getdbt.com/dbt-labs/services) team to support your data organization and migration.
- [How dbt Cloud compares with dbt Core](https://www.getdbt.com/product/dbt-core-vs-dbt-cloud) for a detailed comparison of dbt Core and dbt Cloud.
diff --git a/website/docs/guides/core-to-cloud-3.md b/website/docs/guides/core-to-cloud-3.md
index d343ef6b64d..8e77ae8ab15 100644
--- a/website/docs/guides/core-to-cloud-3.md
+++ b/website/docs/guides/core-to-cloud-3.md
@@ -144,7 +144,7 @@ Leverage the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), powered
### Tips
- Define semantic models and metrics once in dbt Cloud with the [dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) (powered by MetricFlow). Reuse them across various analytics platforms, reducing redundancy and errors.
- Use the [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview) to query metrics in downstream tools for consistent, reliable data metrics.
-- Connect to several data applications, from business intelligence tools to notebooks, spreadsheets, data catalogs, and more, to query your metrics. [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations) include Tableau, Google Sheets, Hex, and more.
+- Connect to several data applications, from business intelligence tools to notebooks, spreadsheets, data catalogs, and more, to query your metrics. [Available integrations](/docs/cloud-integrations/avail-sl-integrations) include Tableau, Google Sheets, Hex, and more.
- Use [exports](/docs/use-dbt-semantic-layer/exports) to write commonly used queries directly within your data platform, on a schedule.
### Caveats
@@ -189,7 +189,7 @@ For the next steps, continue exploring our 3-part-guide series on moving from db
If you need any additional help or have some questions, use the following resources:
-- [dbt Cloud learn courses](https://courses.getdbt.com/collections) for on-demand video learning.
+- [dbt Learn courses](https://learn.getdbt.com) for on-demand video learning.
- Our [Support team](https://docs.getdbt.com/docs/dbt-support) is always available to help you troubleshoot your dbt Cloud issues.
- Join the [dbt Community](https://community.getdbt.com/) to connect with other dbt users, ask questions, and share best practices.
- Subscribe to the [dbt Cloud RSS alerts](https://status.getdbt.com/)
diff --git a/website/docs/guides/databricks-qs.md b/website/docs/guides/databricks-qs.md
index 4e775429657..b969786b384 100644
--- a/website/docs/guides/databricks-qs.md
+++ b/website/docs/guides/databricks-qs.md
@@ -23,7 +23,7 @@ In this quickstart guide, you'll learn how to use dbt Cloud with Databricks. It
- Schedule a job to run.
:::tip Videos for you
-You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
+You can check out [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) for free if you're interested in course learning with videos.
:::
### Prerequisites
@@ -33,7 +33,7 @@ You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamen
### Related content
-- Learn more with [dbt Courses](https://courses.getdbt.com/collections)
+- Learn more with [dbt Learn courses](https://learn.getdbt.com)
- [CI jobs](/docs/deploy/continuous-integration)
- [Deploy jobs](/docs/deploy/deploy-jobs)
- [Job notifications](/docs/deploy/job-notifications)
diff --git a/website/docs/guides/dbt-python-snowpark.md b/website/docs/guides/dbt-python-snowpark.md
index f8406dc98c5..f6d54ee738f 100644
--- a/website/docs/guides/dbt-python-snowpark.md
+++ b/website/docs/guides/dbt-python-snowpark.md
@@ -32,7 +32,7 @@ All code in today’s workshop can be found on [GitHub](https://github.com/dbt-l
### What you need to know
- Basic to intermediate SQL and python.
-- Basic understanding of dbt fundamentals. We recommend the [dbt Fundamentals course](https://courses.getdbt.com/collections) if you're interested.
+- Basic understanding of dbt fundamentals. We recommend the [dbt Fundamentals course](https://learn.getdbt.com/courses/dbt-fundamentals) if you're interested.
- High level machine learning process (encoding, training, testing)
- Simple ML algorithms — we will use logistic regression to keep the focus on the *workflow*, not algorithms!
@@ -436,7 +436,7 @@ Sources allow us to create a dependency between our source database object and o
Staging models are the base of our project, where we bring all the individual components we're going to use to build our more complex and useful models into the project.
-Since we want to focus on dbt and Python in this workshop, check out our [sources](/docs/build/sources) and [staging](/best-practices/how-we-structure/2-staging) docs if you want to learn more (or take our [dbt Fundamentals](https://courses.getdbt.com/collections) course which covers all of our core functionality).
+Since we want to focus on dbt and Python in this workshop, check out our [sources](/docs/build/sources) and [staging](/best-practices/how-we-structure/2-staging) docs if you want to learn more (or take our [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) course which covers all of our core functionality).
### 1. Create sources
@@ -1875,7 +1875,7 @@ To start, let’s look back at our `intermediate.md` file. We can see that we pr
3. View the mini-lineage that looks at the model we are currently selected on (`int_results` in this case).
-4. In our `dbt_project.yml`, we configured `node_colors` depending on the file directory. Starting in dbt v1.3, we can see how our lineage in our docs looks. By color coding your project, it can help you cluster together similar models or steps and more easily troubleshoot.
+4. In our `dbt_project.yml`, we configured `node_colors` depending on the file directory. Color coding your project can help you cluster together similar models or steps and more easily troubleshoot when viewing lineage in your docs.
diff --git a/website/docs/guides/mesh-qs.md b/website/docs/guides/mesh-qs.md
index 3a0a607e523..be6f2ca205e 100644
--- a/website/docs/guides/mesh-qs.md
+++ b/website/docs/guides/mesh-qs.md
@@ -23,7 +23,7 @@ This guide will teach you how to set up a multi-project design using foundationa
For more information on why data mesh is important, read this post: [What is data mesh? The definition and importance of data mesh](https://www.getdbt.com/blog/what-is-data-mesh-the-definition-and-importance-of-data-mesh).
:::tip Videos for you
-You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
+You can check out [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) for free if you're interested in course learning with videos.
You can also watch the [YouTube video on dbt and Snowflake](https://www.youtube.com/watch?v=kbCkwhySV_I&list=PL0QYlrC86xQm7CoOH6RS7hcgLnd3OQioG).
:::
@@ -49,7 +49,7 @@ To leverage dbt Mesh, you need the following:
- [Fabric](https://docs.getdbt.com/guides/microsoft-fabric?step=2)
- [Starburst Galaxy](https://docs.getdbt.com/guides/starburst-galaxy?step=2)
-This guide assumes you have experience with or fundamental knowledge of dbt. Take the [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) course first if you are brand new to dbt.
+This guide assumes you have experience with or fundamental knowledge of dbt. Take the [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) course first if you are brand new to dbt.
## Create and configure two projects
diff --git a/website/docs/guides/microsoft-fabric-qs.md b/website/docs/guides/microsoft-fabric-qs.md
index af14b020182..c3800e63f7a 100644
--- a/website/docs/guides/microsoft-fabric-qs.md
+++ b/website/docs/guides/microsoft-fabric-qs.md
@@ -27,7 +27,7 @@ In this quickstart guide, you'll learn how to use dbt Cloud with [Microsoft Fabr
- As a Microsoft admin, you’ve enabled service principal authentication. You must add the service principal to the Microsoft Fabric workspace with either a Member (recommended) or Admin permission set. For details, refer to [Enable service principal authentication](https://learn.microsoft.com/en-us/fabric/admin/metadata-scanning-enable-read-only-apis) in the Microsoft docs. dbt Cloud needs these authentication credentials to connect to Microsoft Fabric.
### Related content
-- [dbt Courses](https://courses.getdbt.com/collections)
+- [dbt Learn courses](https://learn.getdbt.com)
- [About continuous integration jobs](/docs/deploy/continuous-integration)
- [Deploy jobs](/docs/deploy/deploy-jobs)
- [Job notifications](/docs/deploy/job-notifications)
diff --git a/website/docs/guides/productionize-your-dbt-databricks-project.md b/website/docs/guides/productionize-your-dbt-databricks-project.md
index 109d64e8282..33f25070bdb 100644
--- a/website/docs/guides/productionize-your-dbt-databricks-project.md
+++ b/website/docs/guides/productionize-your-dbt-databricks-project.md
@@ -191,7 +191,7 @@ To get the most out of both tools, you can use the [persist docs config](/refere
### Related docs
-- [Advanced deployments course](https://courses.getdbt.com/courses/advanced-deployment) if you want a deeper dive into these topics
+- [Advanced Deployment course](https://learn.getdbt.com/courses/advanced-deployment) if you want a deeper dive into these topics
- [Autoscaling CI: The intelligent Slim CI](https://docs.getdbt.com/blog/intelligent-slim-ci)
- [Trigger a dbt Cloud Job in your automated workflow with Python](https://discourse.getdbt.com/t/triggering-a-dbt-cloud-job-in-your-automated-workflow-with-python/2573)
- [Databricks + dbt Cloud Quickstart Guide](/guides/databricks)
diff --git a/website/docs/guides/redshift-qs.md b/website/docs/guides/redshift-qs.md
index 9da15973e37..e3685595804 100644
--- a/website/docs/guides/redshift-qs.md
+++ b/website/docs/guides/redshift-qs.md
@@ -22,7 +22,7 @@ In this quickstart guide, you'll learn how to use dbt Cloud with Redshift. It wi
- Schedule a job to run
:::tip Videos for you
-You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
+Check out [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) for free if you're interested in course learning with videos.
:::
### Prerequisites
@@ -32,7 +32,7 @@ You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamen
### Related content
-- Learn more with [dbt Courses](https://courses.getdbt.com/collections)
+- Learn more with [dbt Learn courses](https://learn.getdbt.com)
- [CI jobs](/docs/deploy/continuous-integration)
- [Deploy jobs](/docs/deploy/deploy-jobs)
- [Job notifications](/docs/deploy/job-notifications)
diff --git a/website/docs/guides/refactoring-legacy-sql.md b/website/docs/guides/refactoring-legacy-sql.md
index 13896c3ace3..afcfba1e4b0 100644
--- a/website/docs/guides/refactoring-legacy-sql.md
+++ b/website/docs/guides/refactoring-legacy-sql.md
@@ -33,7 +33,7 @@ When migrating and refactoring code, it’s of course important to stay organize
Let's get into it!
:::info More resources
-This guide is excerpted from the new dbt Learn On-demand Course, "Refactoring SQL for Modularity" - if you're curious, pick up the [free refactoring course here](https://courses.getdbt.com/courses/refactoring-sql-for-modularity), which includes example and practice refactoring projects. Or for a more in-depth look at migrating DDL and DML from stored procedures, refer to the[Migrate from stored procedures](/guides/migrate-from-stored-procedures) guide.
+This guide is excerpted from the new dbt Learn On-demand Course, "Refactoring SQL for Modularity" - if you're curious, pick up the [free refactoring course here](https://learn.getdbt.com/courses/refactoring-sql-for-modularity), which includes example and practice refactoring projects. Or for a more in-depth look at migrating DDL and DML from stored procedures, refer to the [Migrate from stored procedures](/guides/migrate-from-stored-procedures) guide.
:::
## Migrate your existing SQL code
@@ -256,7 +256,7 @@ Under the hood, it generates comparison queries between our before and after sta
Sure, we could write our own query manually to audit these models, but using the dbt `audit_helper` package gives us a head start and allows us to identify variances more quickly.
### Ready for refactoring practice?
-Head to the free on-demand course, [Refactoring from Procedural SQL to dbt](https://courses.getdbt.com/courses/refactoring-sql-for-modularity) for a more in-depth refactoring example + a practice refactoring problem to test your skills.
+Head to the free on-demand course, [Refactoring from Procedural SQL to dbt](https://learn.getdbt.com/courses/refactoring-sql-for-modularity) for a more in-depth refactoring example + a practice refactoring problem to test your skills.
Questions on this guide or the course? Drop a note in #learn-on-demand in [dbt Community Slack](https://getdbt.com/community).
diff --git a/website/docs/guides/sl-migration.md b/website/docs/guides/sl-migration.md
index 76446666638..8a0f6199fae 100644
--- a/website/docs/guides/sl-migration.md
+++ b/website/docs/guides/sl-migration.md
@@ -138,6 +138,6 @@ If you created a new environment in [Step 3](#step-3-setup-the-semantic-layer-in
- [dbt metrics converter](https://github.com/dbt-labs/dbt-converter)
- [Why we're deprecating the dbt_metrics package](/blog/deprecating-dbt-metrics) blog post
- [dbt Semantic Layer API query syntax](/docs/dbt-cloud-apis/sl-jdbc#querying-the-api-for-metric-metadata)
-- [dbt Semantic Layer on-demand courses](https://courses.getdbt.com/courses/semantic-layer)
+- [dbt Semantic Layer on-demand course](https://learn.getdbt.com/courses/semantic-layer)
diff --git a/website/docs/guides/sl-snowflake-qs.md b/website/docs/guides/sl-snowflake-qs.md
index 2fb63f0296d..4310710383c 100644
--- a/website/docs/guides/sl-snowflake-qs.md
+++ b/website/docs/guides/sl-snowflake-qs.md
@@ -118,7 +118,7 @@ Open a new tab and follow these quick steps for account setup and data loading i
- Create a [trial Snowflake account](https://signup.snowflake.com/):
- Select the Enterprise Snowflake edition with ACCOUNTADMIN access. Consider organizational questions when choosing a cloud provider, refer to Snowflake's [Introduction to Cloud Platforms](https://docs.snowflake.com/en/user-guide/intro-cloud-platforms).
- Select a cloud provider and region. All cloud providers and regions will work so choose whichever you prefer.
-- Basic understanding of SQL and dbt. For example, you've used dbt before or have completed the [dbt Fundamentals](https://courses.getdbt.com/collections) course.
+- Basic understanding of SQL and dbt. For example, you've used dbt before or have completed the [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) course.
### What you'll learn
@@ -992,7 +992,7 @@ For next steps, you can start defining your own metrics and learn additional con
Here are some additional resources to help you continue your journey:
- [dbt Semantic Layer FAQs](/docs/use-dbt-semantic-layer/sl-faqs)
-- [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations)
+- [Available integrations](/docs/cloud-integrations/avail-sl-integrations)
- Demo on [how to define and query metrics with MetricFlow](https://www.loom.com/share/60a76f6034b0441788d73638808e92ac?sid=861a94ac-25eb-4fd8-a310-58e159950f5a)
- [Join our live demos](https://www.getdbt.com/resources/webinars/dbt-cloud-demos-with-experts)
diff --git a/website/docs/guides/snowflake-qs.md b/website/docs/guides/snowflake-qs.md
index 8f82b917a36..01646691c8e 100644
--- a/website/docs/guides/snowflake-qs.md
+++ b/website/docs/guides/snowflake-qs.md
@@ -25,7 +25,7 @@ In this quickstart guide, you'll learn how to use dbt Cloud with Snowflake. It w
Snowflake also provides a quickstart for you to learn how to use dbt Cloud. It makes use of a different public dataset (Knoema Economy Data Atlas) than what's shown in this guide. For more information, refer to [Accelerating Data Teams with dbt Cloud & Snowflake](https://quickstarts.snowflake.com/guide/accelerating_data_teams_with_snowflake_and_dbt_cloud_hands_on_lab/) in the Snowflake docs.
:::tip Videos for you
-You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
+You can check out [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) for free if you're interested in course learning with videos.
You can also watch the [YouTube video on dbt and Snowflake](https://www.youtube.com/watch?v=kbCkwhySV_I&list=PL0QYlrC86xQm7CoOH6RS7hcgLnd3OQioG).
:::
@@ -37,7 +37,7 @@ You can also watch the [YouTube video on dbt and Snowflake](https://www.youtube.
### Related content
-- Learn more with [dbt Courses](https://courses.getdbt.com/collections)
+- Learn more with [dbt Learn courses](https://learn.getdbt.com)
- [How we configure Snowflake](https://blog.getdbt.com/how-we-configure-snowflake/)
- [CI jobs](/docs/deploy/continuous-integration)
- [Deploy jobs](/docs/deploy/deploy-jobs)
diff --git a/website/docs/guides/starburst-galaxy-qs.md b/website/docs/guides/starburst-galaxy-qs.md
index bad76db922e..e730abd1fed 100644
--- a/website/docs/guides/starburst-galaxy-qs.md
+++ b/website/docs/guides/starburst-galaxy-qs.md
@@ -24,7 +24,7 @@ In this quickstart guide, you'll learn how to use dbt Cloud with [Starburst Gala
- Connect to multiple data sources in addition to your S3 bucket.
:::tip Videos for you
-You can check out [dbt Fundamentals](https://courses.getdbt.com/courses/fundamentals) for free if you're interested in course learning with videos.
+You can check out [dbt Fundamentals](https://learn.getdbt.com/courses/dbt-fundamentals) for free if you're interested in course learning with videos.
You can also watch the [Build Better Data Pipelines with dbt and Starburst](https://www.youtube.com/watch?v=tfWm4dWgwRg) YouTube video produced by Starburst Data, Inc.
:::
@@ -41,7 +41,7 @@ You can also watch the [Build Better Data Pipelines with dbt and Starburst](http
### Related content
-- [dbt Courses](https://courses.getdbt.com/collections)
+- [dbt Learn courses](https://learn.getdbt.com)
- [dbt Cloud CI job](/docs/deploy/continuous-integration)
- [Job notifications](/docs/deploy/job-notifications)
- [Source freshness](/docs/deploy/source-freshness)
diff --git a/website/docs/reference/artifacts/manifest-json.md b/website/docs/reference/artifacts/manifest-json.md
index 47a9849eda5..5a487f2f177 100644
--- a/website/docs/reference/artifacts/manifest-json.md
+++ b/website/docs/reference/artifacts/manifest-json.md
@@ -36,7 +36,7 @@ All resources nested within `nodes`, `sources`, `metrics`, `exposures`, `macros`
- `name`: Resource name.
- `unique_id`: `..`, same as dictionary key
- `package_name`: Name of package that defines this resource.
-- `root_path`: Absolute file path of this resource's package. (**Note:** This is removed for most node types in dbt Core v1.4 / manifest v8 to reduce duplicative information across nodes, but it is still present for seeds.)
+- `root_path`: Absolute file path of this resource's package. (**Note:** This was removed for most node types in dbt Core v1.4 / manifest v8 to reduce duplicative information across nodes, but it is still present for seeds.)
- `path`: Relative file path of this resource's definition within its "resource path" (`model-paths`, `seed-paths`, etc.).
- `original_file_path`: Relative file path of this resource's definition, including its resource path.
diff --git a/website/docs/reference/artifacts/run-results-json.md b/website/docs/reference/artifacts/run-results-json.md
index aa99976bfa5..192b34a85f3 100644
--- a/website/docs/reference/artifacts/run-results-json.md
+++ b/website/docs/reference/artifacts/run-results-json.md
@@ -24,7 +24,7 @@ Note: `dbt source freshness` produces a different artifact, [`sources.json`](/re
### Top-level keys
- [`metadata`](/reference/artifacts/dbt-artifacts#common-metadata)
-- `args`: Dictionary of arguments passed to the CLI command or RPC method that produced this artifact. Most useful is `which` (command) or `rpc_method`. This dict excludes null values, and includes default values if they are not null. Equivalent to [`invocation_args_dict`](/reference/dbt-jinja-functions/flags#invocation_args_dict) in the dbt-Jinja context.
+- `args`: Dictionary of arguments passed to the CLI command or RPC method that produced this artifact. Most useful is `which` (command) or `rpc_method`. This dict excludes null values, and includes default values if they are not null. Equivalent to [`invocation_args_dict`](/reference/dbt-jinja-functions/flags#invocation_args_dict) in the dbt-Jinja context.
- `elapsed_time`: Total invocation time in seconds.
- `results`: Array of node execution details.
diff --git a/website/docs/reference/commands/list.md b/website/docs/reference/commands/list.md
index 5caabdc2b2e..e73699dc78c 100644
--- a/website/docs/reference/commands/list.md
+++ b/website/docs/reference/commands/list.md
@@ -73,16 +73,6 @@ $ dbt ls --select snowplow.* --output json
**Listing JSON output with custom keys**
-
-
-```
-$ dbt ls --select snowplow.* --output json --output-keys "name resource_type description"
-{"name": "snowplow_events", "description": "This is a pretty cool model", ...}
-{"name": "snowplow_page_views", "description": "This model is even cooler", ...}
-...
-```
-
-
```
diff --git a/website/docs/reference/commands/run.md b/website/docs/reference/commands/run.md
index 609503e62dd..26db40cb7e4 100644
--- a/website/docs/reference/commands/run.md
+++ b/website/docs/reference/commands/run.md
@@ -36,12 +36,8 @@ dbt run --full-refresh
-
-
You can also supply the flag by its short name: `dbt run -f`.
-
-
In the dbt compilation context, this flag will be available as [flags.FULL_REFRESH](/reference/dbt-jinja-functions/flags). Further, the `is_incremental()` macro will return `false` for *all* models when the `--full-refresh` flag is specified.
diff --git a/website/docs/reference/dbt-jinja-functions/builtins.md b/website/docs/reference/dbt-jinja-functions/builtins.md
index 9f1d139ab20..0da7c6536a8 100644
--- a/website/docs/reference/dbt-jinja-functions/builtins.md
+++ b/website/docs/reference/dbt-jinja-functions/builtins.md
@@ -59,22 +59,6 @@ It includes logic to extract user-provided arguments, including version
-
-
-From dbt v1.4 and lower, use the following macro to override the `ref` method available in the model compilation context to return a [Relation](/reference/dbt-classes#relation) with the database name overriden to `dev`:
-
-```
-
-{% macro ref(model_name) %}
-
- {% set rel = builtins.ref(model_name) %}
- {% set newrel = rel.replace_path(database="dev") %}
- {% do return(newrel) %}
-
-{% endmacro %}
-```
-
-
Logic within the ref macro can also be used to control which elements of the model path are rendered when run. For example, the following logic renders only the schema and object identifier, but not the database reference (i.e. `my_schema.my_model` rather than `my_database.my_schema.my_model`). This is especially useful when using Snowflake as a warehouse, if you intend to change the name of the database post-build and wish the references to remain accurate.
```
diff --git a/website/docs/reference/dbt-jinja-functions/cross-database-macros.md b/website/docs/reference/dbt-jinja-functions/cross-database-macros.md
index e71f6096a2c..df3896b1cbb 100644
--- a/website/docs/reference/dbt-jinja-functions/cross-database-macros.md
+++ b/website/docs/reference/dbt-jinja-functions/cross-database-macros.md
@@ -18,8 +18,6 @@ Please make sure to take a look at the [SQL expressions section](#sql-expression
## All functions (alphabetical)
-
-
- [Cross-database macros](#cross-database-macros)
- [All functions (alphabetical)](#all-functions-alphabetical)
- [Data type functions](#data-type-functions)
@@ -66,10 +64,6 @@ Please make sure to take a look at the [SQL expressions section](#sql-expression
- [Date and time parts](#date-and-time-parts)
- [SQL expressions](#sql-expressions)
-
-
-
-
[**Data type functions**](#data-type-functions)
- [type_bigint](#type_bigint)
- [type_boolean](#type_boolean)
@@ -79,21 +73,16 @@ Please make sure to take a look at the [SQL expressions section](#sql-expression
- [type_string](#type_string)
- [type_timestamp](#type_timestamp)
-
[**Set functions**](#set-functions)
- [except](#except)
- [intersect](#intersect)
-
-
[**Array functions**](#array-functions)
- [array_append](#array_append)
- [array_concat](#array_concat)
- [array_construct](#array_construct)
-
-
[**String functions**](#string-functions)
- [concat](#concat)
- [hash](#hash)
@@ -145,9 +134,8 @@ This macro yields the database-specific data type for a `BIGINT`.
bigint
```
-
-
### type_boolean
+
__Args__:
* None
@@ -166,9 +154,8 @@ This macro yields the database-specific data type for a `BOOLEAN`.
BOOLEAN
```
-
-
### type_float
+
__Args__:
* None
@@ -327,8 +314,6 @@ __Args__:
intersect
```
-
-
## Array functions
### array_append
@@ -401,8 +386,6 @@ array[ 1 , 2 , 3 , 4 ]
array[ 'blue' , 'green' ]
```
-
-
## String functions
### concat
diff --git a/website/docs/reference/dbt-jinja-functions/env_var.md b/website/docs/reference/dbt-jinja-functions/env_var.md
index 6410e4026df..a239ce4ff13 100644
--- a/website/docs/reference/dbt-jinja-functions/env_var.md
+++ b/website/docs/reference/dbt-jinja-functions/env_var.md
@@ -80,8 +80,6 @@ Any env var named with the prefix `DBT_ENV_CUSTOM_ENV_` will be included in two
- [dbt artifacts](/reference/artifacts/dbt-artifacts#common-metadata): `metadata` -> `env`
- [events and structured logs](/reference/events-logging#info-fields): `info` -> `extra`
-
-
A dictionary of these prefixed env vars will also be available in a `dbt_metadata_envs` context variable:
```sql
-- {{ dbt_metadata_envs }}
@@ -98,8 +96,6 @@ Compiles to:
select 1 as id
```
-
-
### dbt Cloud usage
If you are using dbt Cloud, you must adhere to the naming conventions for environment variables. Environment variables in dbt Cloud must be prefixed with `DBT_` (including `DBT_ENV_CUSTOM_ENV_` or `DBT_ENV_SECRET_`). Environment variable keys are uppercased and case sensitive. When referencing `{{env_var('DBT_KEY')}}` in your project's code, the key must match exactly the variable defined in dbt Cloud's UI.
diff --git a/website/docs/reference/dbt-jinja-functions/flags.md b/website/docs/reference/dbt-jinja-functions/flags.md
index 6fe985cc93e..534a0fa8987 100644
--- a/website/docs/reference/dbt-jinja-functions/flags.md
+++ b/website/docs/reference/dbt-jinja-functions/flags.md
@@ -30,8 +30,6 @@ Recommended use cases include:
**Note:** It is _not_ recommended to use flags as an input to parse-time configurations, properties, or dependencies (`ref` + `source`). Flags are likely to change in every invocation of dbt, and their parsed values will become stale (and yield incorrect results) in subsequent invocations that have partial parsing enabled. For more details, see [the docs on parsing](/reference/parsing).
-
-
### invocation_args_dict
For the full set of information passed from the CLI—subcommand, flags, arguments—you can use `invocation_args_dict`. This is equivalent to the `args` dictionary in [`run_results.json`](/reference/artifacts/run-results-json).
@@ -50,7 +48,7 @@ select 1 as id
-
+
```shell
$ DBT_ENV_CUSTOM_ENV_MYVAR=myvalue dbt compile -s my_model
@@ -70,9 +68,6 @@ select 1 as id
-
-
-
The `invocation_command` key within `invocation_args_dict` includes the entire subcommand when it compiles:
diff --git a/website/docs/reference/dbt-jinja-functions/graph.md b/website/docs/reference/dbt-jinja-functions/graph.md
index 491b7836f45..bea09c326e8 100644
--- a/website/docs/reference/dbt-jinja-functions/graph.md
+++ b/website/docs/reference/dbt-jinja-functions/graph.md
@@ -23,55 +23,6 @@ to understand how to effectively use this variable.
The `graph` context variable is a dictionary which maps node ids onto dictionary
representations of those nodes. A simplified example might look like:
-
-
-```json
-{
- "nodes": {
- "model.my_project.model_name": {
- "unique_id": "model.my_project.model_name",
- "config": {"materialized": "table", "sort": "id"},
- "tags": ["abc", "123"],
- "path": "models/path/to/model_name.sql",
- ...
- },
- ...
- },
- "sources": {
- "source.my_project.snowplow.event": {
- "unique_id": "source.my_project.snowplow.event",
- "database": "analytics",
- "schema": "analytics",
- "tags": ["abc", "123"],
- "path": "models/path/to/schema.yml",
- ...
- },
- ...
- },
- "exposures": {
- "exposure.my_project.traffic_dashboard": {
- "unique_id": "source.my_project.traffic_dashboard",
- "type": "dashboard",
- "maturity": "high",
- "path": "models/path/to/schema.yml",
- ...
- },
- ...
- },
- "metrics": {
- "metric.my_project.count_all_events": {
- "unique_id": "metric.my_project.count_all_events",
- "type": "count",
- "path": "models/path/to/schema.yml",
- ...
- },
- ...
- }
-}
-```
-
-
-
```json
diff --git a/website/docs/reference/dbt-jinja-functions/model.md b/website/docs/reference/dbt-jinja-functions/model.md
index 9ccf0759470..59c3421d856 100644
--- a/website/docs/reference/dbt-jinja-functions/model.md
+++ b/website/docs/reference/dbt-jinja-functions/model.md
@@ -46,7 +46,7 @@ If you're using the CLI, use [log()](/reference/dbt-jinja-functions/log) to prin
To view the structure of `models` and their definitions:
- Refer to [dbt JSON Schema](https://schemas.getdbt.com/) for describing and consuming dbt generated artifacts
-- Select the corresponding manifest version under **Manifest**. For example if you're on dbt v1.3, then you would select Manifest v7
+- Select the corresponding manifest version under **Manifest**. For example, if you're on dbt v1.8, you would select Manifest v12
* The `manifest.json` version number is related to (but not _equal_ to) your dbt version, so you _must_ use the correct `manifest.json` version for your dbt version. To find the correct `manifest.json` version, refer to [Manifest](/reference/artifacts/manifest-json) and select the dbt version on the top navigation (such as `v1.5`). This will help you find out which tags are associated with your model.
- Then go to `nodes` --> Select Additional properties --> `CompiledModelNode` or view other definitions/objects.
diff --git a/website/docs/reference/dbt_project.yml.md b/website/docs/reference/dbt_project.yml.md
index 65f9761c9bc..6166be1df6d 100644
--- a/website/docs/reference/dbt_project.yml.md
+++ b/website/docs/reference/dbt_project.yml.md
@@ -3,12 +3,6 @@ Every [dbt project](/docs/build/projects) needs a `dbt_project.yml` file — thi
- dbt uses [YAML](https://yaml.org/) in a few different places. If you're new to YAML, it would be worth learning how arrays, dictionaries, and strings are represented.
-
-
-- By default, dbt looks for the `dbt_project.yml` in your current working directory and its parents, but you can set a different directory using the `--project-dir` flag.
-
-
-
- By default, dbt looks for the `dbt_project.yml` in your current working directory and its parents, but you can set a different directory using the `--project-dir` flag or the `DBT_PROJECT_DIR` environment variable.
diff --git a/website/docs/reference/events-logging.md b/website/docs/reference/events-logging.md
index 603a60f2640..de79d8d9171 100644
--- a/website/docs/reference/events-logging.md
+++ b/website/docs/reference/events-logging.md
@@ -32,9 +32,7 @@ As dbt runs, it generates events. The most common way to see those events is as
_For more details about how the eventing system has been implemented in dbt-core, see the [`events` module README](https://github.com/dbt-labs/dbt-core/blob/HEAD/core/dbt/events/README.md)._
-
-
-Starting in v1.4, the structure of each event in `dbt-core` is backed by a schema defined using [protocol buffers](https://developers.google.com/protocol-buffers). All schemas are defined in the [`types.proto`](https://github.com/dbt-labs/dbt-core/blob/3bf148c443e6b1da394b62e88a08f1d7f1d8ccaa/core/dbt/events/core_types.proto) file within the `dbt-core` codebase.
+The structure of each event in `dbt-core` is backed by a schema defined using [protocol buffers](https://developers.google.com/protocol-buffers). All schemas are defined in the [`types.proto`](https://github.com/dbt-labs/dbt-core/blob/3bf148c443e6b1da394b62e88a08f1d7f1d8ccaa/core/dbt/events/core_types.proto) file within the `dbt-core` codebase.
Every event has the same two top-level keys:
- `info`: Information common to all events. See the table below for the breakdown.
@@ -67,7 +65,7 @@ Many events are fired while compiling or running a specific DAG node (model, see
| `node_finished_at` | Timestamp when node processing completed |
| `node_name` | Name of this model/seed/test/etc |
| `node_path` | File path to where this resource is defined |
-| `node_relation` | Nested object containing this node's database representation: `database`, `schema`, `alias`, and full `relation_name` with quoting & inclusion policies appliedAdded in v1.5 |
+| `node_relation` | Nested object containing this node's database representation: `database`, `schema`, `alias`, and full `relation_name` with quoting & inclusion policies applied |
| `node_started_at` | Timestamp when node processing started |
| `node_status` | Current status of the node, either `RunningStatus` (while running) or `NodeStatus` (finished) as defined in [the result contract](https://github.com/dbt-labs/dbt-core/blob/eba90863ed4043957330ea44ca267db1a2d81fcd/core/dbt/contracts/results.py#L75-L88) |
| `resource_type` | `model`, `test`, `seed`, `snapshot`, etc. |
@@ -119,84 +117,8 @@ Many events are fired while compiling or running a specific DAG node (model, see
}
```
-
-
-
-
-When `json` [log formatting](/reference/global-configs/logs) is enabled, dbt will output produce rich, structured log information which can be piped into monitoring tools for analysis, or to power applications with dbt metadata in real time.
-
-Each log line will have the following properties:
-
-| Field | Description |
-|-------------|---------------|
-| `code` | A unique identifier for each event type |
-| `data` | A dictionary containing programmatically accessible information about the log line. The contents of this dictionary vary based on the event type which generated this log message. |
-| [`invocation_id`](/reference/dbt-jinja-functions/invocation_id) | A unique identifier for this invocation of dbt |
-| `level` | A string representation of the log level (`debug`, `info`, `warn`, `error`) |
-| `log_version` | Integer indicating version |
-| `msg` | The human-friendly log message. **Note**: This message is not intended for machine consumption. Log messages are subject to change in future versions of dbt, and those changes may or may not coincide with a change in `log_version`. |
-| `node_info` | If applicable, a dictionary of human- and machine-friendly information about a currently running resource |
-| `pid` | The process ID for the running dbt invocation which produced this log message |
-| `thread_name` | The thread in which the log message was produced, helpful for tracking queries when dbt is run with multiple threads |
-| `ts` | When the log line was printed |
-| `type` | Always `log_line` |
-
-If available, `node_info` will include:
-
-| Field | Description |
-|-------------|---------------|
-| `materialized` | view, table, incremental, etc. |
-| `node_finished_at` | Timestamp when node processing completed |
-| `node_name` | Name of this model/seed/test/etc |
-| `node_path` | File path to where this resource is defined |
-| `node_started_at` | Timestamp when node processing started |
-| `node_status` | Current status of the node, as defined in [the result contract](https://github.com/dbt-labs/dbt-core/blob/HEAD/core/dbt/contracts/results.py#L61-L74) |
-| `resource_type` | model, test, seed, snapshot, etc. |
-| `type` | Always `'node_status'` |
-| `unique_id` | The unique identifier for this resource, which can be used to look up contextual information in a [manifest](/reference/artifacts/manifest-json) |
-
-### Example
-
-```json
-{
- "code": "Q033",
- "data":
- {
- "description": "view model dbt_testing.name_list",
- "index": 7,
- "total": 7
- },
- "invocation_id": "30206572-f52f-4b91-af6d-d2b18fdbbbb8",
- "level": "info",
- "log_version": 1,
- "msg": "7 of 7 START view model dbt_testing.name_list.............................. [RUN]",
- "node_info":
- {
- "materialized": "view",
- "node_finished_at": null,
- "node_name": "male_list_view",
- "node_path": "human/name_list.sql",
- "node_started_at": "2021-12-02T21:47:03.477004",
- "node_status": "started",
- "resource_type": "model",
- "type": "node_status",
- "unique_id": "model.jaffle_shop.name_list"
- },
- "pid": 81915,
- "thread_name": "Thread-4",
- "ts": "2021-12-02T21:47:03.480384Z",
- "type": "log_line"
-}
-```
-
-
-
## Python interface
-:::warning
-The `EVENT_HISTORY` object has been deprecated and removed in dbt Core v1.4+
-:::
-
Older versions of `dbt-core` made available a full history of events fired during an invocation, in the form of an `EVENT_HISTORY` object.
diff --git a/website/docs/reference/exposure-properties.md b/website/docs/reference/exposure-properties.md
index 75f5714f5fe..471c2a00824 100644
--- a/website/docs/reference/exposure-properties.md
+++ b/website/docs/reference/exposure-properties.md
@@ -16,14 +16,8 @@ Exposures are defined in `properties.yml` files nested under an `exposures:` key
You can name these files `whatever_you_want.yml`, and nest them arbitrarily deeply in subfolders within the `models/` directory.
-
-
Exposure names must contain only letters, numbers, and underscores (no spaces or special characters). For a short human-friendly name with title casing, spaces, and special characters, use the `label` property.
-
-
-
-
```yml
@@ -55,43 +49,6 @@ exposures:
```
-
-
-
-
-
-
-```yml
-version: 2
-
-exposures:
- - name:
- [description](/reference/resource-properties/description):
- type: {dashboard, notebook, analysis, ml, application}
- url:
- maturity: {high, medium, low} # Indicates level of confidence or stability in the exposure
- [tags](/reference/resource-configs/tags): []
- [meta](/reference/resource-configs/meta): {}
- owner:
- name:
- email:
-
- depends_on:
- - ref('model')
- - ref('seed')
- - source('name', 'table')
-
- # added in dbt Core v1.3
- label: "Human-Friendly Name for this Exposure!"
- [config](/reference/resource-properties/config):
- enabled: true | false
-
- - name: ... # declare properties of additional exposures
-```
-
-
-
-
## Example
@@ -102,7 +59,7 @@ version: 2
exposures:
- name: weekly_jaffle_metrics
- label: Jaffles by the Week # optional, new in dbt Core v1.3
+ label: Jaffles by the Week # optional
type: dashboard # required
maturity: high # optional
url: https://bi.tool/dashboards/1 # optional
diff --git a/website/docs/reference/global-configs/cache.md b/website/docs/reference/global-configs/cache.md
index a605e1e70f3..7687df30339 100644
--- a/website/docs/reference/global-configs/cache.md
+++ b/website/docs/reference/global-configs/cache.md
@@ -31,36 +31,3 @@ dbt --cache-selected-only run --select salesforce
-
-
-### Cache database objects for selected resource
-
-:::caution Experimental config flag
-This should be considered an _unstable_ config flag because it is experimental and subject to change. We reserve the right to make breaking changes to this config flag.
-:::
-
-At the start of runs, dbt caches metadata about all the objects in all the schemas where it might materialize resources (such as models). By default, dbt caches all schemas related to the project. When this config is enabled, dbt will only cache schemas related to selected resources for the current run. This can offer significant speed improvements when running a small subset of a large project.
-
-For example, to improve speed and performance while focused on developing Salesforce models, which are materialized into their own dedicated schema, you would select those models and pass the `cache-selected-only` flag:
-
-```text
-
-dbt --cache-selected-only run --select salesforce
-
-```
-
-The previous example enables you to start working with your salesforce models, and dbt will only cache those models instead of the entire project.
-
-To set the config in your `profile.yml`:
-
-
-```yaml
-
-config:
- cache_selected_only: true
-
-```
-
-
-
-
diff --git a/website/docs/reference/global-configs/logs.md b/website/docs/reference/global-configs/logs.md
index 7ee7890e302..19ca8da6b5c 100644
--- a/website/docs/reference/global-configs/logs.md
+++ b/website/docs/reference/global-configs/logs.md
@@ -8,12 +8,6 @@ sidebar: "logs"
dbt outputs logs to two different locations: CLI console and the log file.
-
-
-The `LOG_FORMAT` config specifies how dbt's logs should be formatted and has three options: `json`, `text`, and `debug`.
-
-
-
The `LOG_FORMAT` and `LOG_FORMAT_FILE` configs specify how dbt's logs should be formatted, and they each have the same options: `json`, `text`, and `debug`.
@@ -50,12 +44,6 @@ The `json` format outputs fully structured logs in the format
{"data": {"adapter_name": "postgres", "adapter_version": "=1.8.0"}, "info": {"category": "", "code": "E034", "extra": {}, "invocation_id": "82131fa0-d2b4-4a77-9436-019834e22746", "level": "info", "msg": "Registered adapter: postgres=1.8.0", "name": "AdapterRegistered", "pid": 7875, "thread": "MainThread", "ts": "2024-05-29T23:32:56.437986Z"}}
```
-
-
-When the `LOG_FORMAT` is set explicitly, it will take affect in both the console and log files.
-
-
-
When the `LOG_FORMAT` is set explicitly, it will take effect in both the console and log files, whereas the `LOG_FORMAT_FILE` only affects the log file.
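As a quick sketch of that split (the chosen formats are just an example):

```shell
# Keep the console human-readable while writing structured JSON to the log file
dbt --log-format text --log-format-file json run
```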
diff --git a/website/docs/reference/global-configs/print-output.md b/website/docs/reference/global-configs/print-output.md
index 1d44a8bc5f0..fc129b162a7 100644
--- a/website/docs/reference/global-configs/print-output.md
+++ b/website/docs/reference/global-configs/print-output.md
@@ -6,12 +6,6 @@ sidebar: "Print output"
### Suppress `print()` messages in stdout
-
-
-By default, dbt includes [`print()`](/reference/dbt-jinja-functions/print) messages in standard out (stdout). You can use the `DBT_NO_PRINT` environment variable to prevent these messages from showing up in stdout.
-
-
-
By default, dbt includes [`print()`](/reference/dbt-jinja-functions/print) messages in standard out (stdout). You can use the `DBT_PRINT` environment variable to prevent these messages from showing up in stdout.
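A minimal sketch, assuming the boolean-style value shown here disables `print()` output:

```shell
# Suppress print() messages for this invocation only (assumed value)
DBT_PRINT=false dbt run
```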
diff --git a/website/docs/reference/global-configs/resource-type.md b/website/docs/reference/global-configs/resource-type.md
index cab48c6429c..2d7672d108d 100644
--- a/website/docs/reference/global-configs/resource-type.md
+++ b/website/docs/reference/global-configs/resource-type.md
@@ -8,6 +8,10 @@ The `--resource-type` and `--exclude-resource-type` flags include or exclude res
This means the flags enable you to specify which types of resources to include or exclude when running the commands, instead of targeting specific resources.
+:::tip Note
+The `--exclude-resource-type` flag is only available in dbt version 1.8 and higher; it isn't available in older versions.
+:::
+
The available resource types are:
@@ -56,14 +60,14 @@ The available resource types are:
## Example
-Instead of targeting specific resources, use the `--resource-flag` or `--exclude-resource-type` flags to target all resources of a certain type: `dbt build --exclude-resource-type RESOURCE_TYPE` replacing `RESOURCE_TYPE` with the resource type you want to exclude.
+Instead of targeting specific resources, use the `--resource-type` or `--exclude-resource-type` flags to target all resources of a certain type: `dbt build --resource-type RESOURCE_TYPE`, replacing `RESOURCE_TYPE` with the resource type you want to include or exclude.
-- For example, use the following command to exclude _all_ snapshots from your dbt build process:
+- For example, use the following command to include _all_ snapshots in your dbt build process:
```text
-dbt build --exclude-resource-type snapshot
+dbt build --resource-type snapshot
```
@@ -84,7 +88,7 @@ dbt build --resource-type saved_query
-- In this example, use the following command to exclude _all_ unit tests, from your dbt build process:
+- In this example, use the following command to exclude _all_ unit tests from your dbt build process. Note that the `--exclude-resource-type` flag is only available in dbt version 1.8 and higher:
diff --git a/website/docs/reference/node-selection/defer.md b/website/docs/reference/node-selection/defer.md
index 81a0f4a0328..bbcc5f7d567 100644
--- a/website/docs/reference/node-selection/defer.md
+++ b/website/docs/reference/node-selection/defer.md
@@ -43,26 +43,16 @@ When using defer, you may be selecting from production datasets, development dat
- if you apply env-specific limits in dev but not prod, as you may end up selecting more data than you expect
- when executing tests that depend on multiple parents (e.g. `relationships`), since you're testing "across" environments
-
-
-Deferral requires both `--defer` and `--state` to be set, either by passing flags explicitly or by setting environment variables (`DBT_DEFER_TO_STATE` and `DBT_ARTIFACT_STATE_PATH`). If you use dbt Cloud, read about [how to set up CI jobs](/docs/deploy/continuous-integration).
-
-
-
Deferral requires both `--defer` and `--state` to be set, either by passing flags explicitly or by setting environment variables (`DBT_DEFER` and `DBT_STATE`). If you use dbt Cloud, read about [how to set up CI jobs](/docs/deploy/continuous-integration).
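For example, a minimal sketch (the artifact path is a placeholder):

```shell
# Explicit flags
dbt run --select my_model --defer --state path/to/prod/artifacts

# Equivalent environment variables
DBT_DEFER=true DBT_STATE=path/to/prod/artifacts dbt run --select my_model
```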
-
-
#### Favor state
You can optionally skip the second criterion by passing the `--favor-state` flag. If passed, dbt will favor using the node defined in your `--state` namespace, even if the node exists in the current target.
-
-
### Example
In my local development environment, I create all models in my target schema, `dev_alice`. In production, the same models are created in a schema named `prod`.
diff --git a/website/docs/reference/node-selection/methods.md b/website/docs/reference/node-selection/methods.md
index 4408e53c35d..903790100fb 100644
--- a/website/docs/reference/node-selection/methods.md
+++ b/website/docs/reference/node-selection/methods.md
@@ -124,9 +124,6 @@ dbt run --select "config.schema:audit" # run all models that are cr
dbt run --select "config.cluster_by:geo_country" # run all models clustered by `geo_country`
```
-
-
-
While most config values are strings, you can also use the `config` method to match boolean configs, dictionary keys, and values in lists.
For example, given a model with the following configurations:
@@ -150,8 +147,6 @@ dbt ls -s config.grants.select:reporter
dbt ls -s config.transient:true
```
-
-
### The "test_type" method
@@ -202,12 +197,6 @@ dbt test --select "test_name:range_min_max" # run all instances of a custom
**N.B.** State-based selection is a powerful, complex feature. Read about [known caveats and limitations](/reference/node-selection/state-comparison-caveats) to state comparison.
-
-
-The `state` method is used to select nodes by comparing them against a previous version of the same project, which is represented by a [manifest](/reference/artifacts/manifest-json). The file path of the comparison manifest _must_ be specified via the `--state` flag or `DBT_ARTIFACT_STATE_PATH` environment variable.
-
-
-
The `state` method is used to select nodes by comparing them against a previous version of the same project, which is represented by a [manifest](/reference/artifacts/manifest-json). The file path of the comparison manifest _must_ be specified via the `--state` flag or `DBT_STATE` environment variable.
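For example, a short sketch (the artifact path is a placeholder):

```shell
# Run only the nodes that changed relative to the comparison manifest, plus their children
dbt run --select "state:modified+" --state path/to/prod/artifacts
```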
@@ -288,16 +277,6 @@ The following dbt commands produce `sources.json` artifacts whose results can be
After issuing one of the above commands, you can reference the source freshness results by adding a selector to a subsequent command as follows:
-
-
-```bash
-# You can also set the DBT_ARTIFACT_STATE_PATH environment variable instead of the --state flag.
-dbt source freshness # must be run again to compare current to previous state
-dbt build --select "source_status:fresher+" --state path/to/prod/artifacts
-```
-
-
-
```bash
@@ -310,11 +289,6 @@ dbt build --select "source_status:fresher+" --state path/to/prod/artifacts
### The "group" method
-
-
-Supported in v1.5 or newer.
-
-
@@ -329,12 +303,6 @@ dbt run --select "group:finance" # run all models that belong to the finance gro
### The "access" method
-
-
-Supported in v1.5 or newer.
-
-
-
The `access` method selects models based on their [access](/reference/resource-configs/access) property.
@@ -349,12 +317,6 @@ dbt list --select "access:protected" # list all protected models
### The "version" method
-
-
-Supported in v1.5 or newer.
-
-
-
The `version` method selects [versioned models](/docs/collaborate/govern/model-versions) based on their [version identifier](/reference/resource-properties/versions) and [latest version](/reference/resource-properties/latest_version).
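As a brief sketch, assuming the `latest` selector value:

```shell
# List only the latest version of each versioned model
dbt list --select "version:latest"
```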
diff --git a/website/docs/reference/node-selection/syntax.md b/website/docs/reference/node-selection/syntax.md
index da7a3af4bc3..ce7d27cb6b6 100644
--- a/website/docs/reference/node-selection/syntax.md
+++ b/website/docs/reference/node-selection/syntax.md
@@ -136,13 +136,6 @@ Together, the `state:` selector and deferral enable ["slim CI"](/best-practices/
State and defer can be set by environment variables as well as CLI flags:
-
-
-- `--state` or `DBT_ARTIFACT_STATE_PATH`: file path
-- `--defer` or `DBT_DEFER_TO_STATE`: boolean
-
-
-
- `--state` or `DBT_STATE`: file path
diff --git a/website/docs/reference/node-selection/test-selection-examples.md b/website/docs/reference/node-selection/test-selection-examples.md
index 4d1936863e9..11362b2364b 100644
--- a/website/docs/reference/node-selection/test-selection-examples.md
+++ b/website/docs/reference/node-selection/test-selection-examples.md
@@ -35,35 +35,6 @@ In both cases, `test_type` checks a property of the test itself. These are forms
### Indirect selection
-
-
-There are two modes to configure the behavior when performing indirect selection (with `eager` as the default):
-
-1. `eager` (default) - include ANY test that references the selected nodes
-1. `cautious` - restrict to tests that ONLY refer to selected nodes
-
-Note that test exclusion is always greedy: if ANY parent is explicitly excluded, the test will be excluded as well.
-
-The "cautious" mode can be useful in environments when you're only building a subset of your DAG, and you want to avoid test failures in "eager" mode caused by unbuilt resources. (Another way to achieve this is with [deferral](/reference/node-selection/defer)).
-
-
-
-
-
-There are three modes to configure the behavior when performing indirect selection (with `eager` as the default):
-
-1. `eager` (default) - include ANY test that references the selected nodes
-1. `cautious` - restrict to tests that ONLY refer to selected nodes
-1. `buildable` - restrict to tests that ONLY refer to selected nodes (or their ancestors)
-
-Note that test exclusion is always greedy: if ANY parent is explicitly excluded, the test will be excluded as well.
-
-:::tip Building subsets of a DAG
-The `buildable` and `cautious` modes can be useful when you're only building a subset of your DAG, and you want to avoid test failures in `eager` mode caused by unbuilt resources. You can also achieve this with [deferral](/reference/node-selection/defer).
-:::
-
-
-
@@ -146,99 +117,6 @@ dbt build --select "orders" --indirect-selection=empty
-
-
-
-
-
-
-
-By default, a test will run when ANY parent is selected; we call this "eager" indirect selection. In this example, that would include any test that references orders, even if it references other models as well.
-
-In this mode, any test that depends on unbuilt resources will raise an error.
-
-```shell
-dbt test --select "orders"
-dbt build --select "orders"
-```
-
-
-
-
-
-It is possible to prevent tests from running if one or more of its parents is unselected (and therefore unbuilt); we call this "cautious" indirect selection.
-
-It will only include tests whose references are each within the selected nodes.
-
-Put another way, it will prevent tests from running if one or more of its parents is unselected.
-
-```shell
-
-dbt test --select "orders" --indirect-selection=cautious
-dbt build --select "orders" --indirect-selection=cautious
-
-```
-
-
-
-
-
-This mode is similarly conservative like "cautious", but is slightly more inclusive.
-
-It will only include tests whose references are each within the selected nodes (or their ancestors).
-
-This is useful in the same scenarios as "cautious", but also includes when a test depends on a model **and** a direct ancestor of that model (like confirming an aggregation has the same totals as its input).
-
-```shell
-dbt test --select "orders" --indirect-selection=buildable
-dbt build --select "orders" --indirect-selection=buildable
-
-```
-
-
-
-
-
-
-
-
-
-
-
-
-
-By default, a test will run when ANY parent is selected; we call this "eager" indirect selection. In this example, that would include any test that references orders, even if it references other models as well.
-
-In this mode, any test that depends on unbuilt resources will raise an error.
-
-```shell
-dbt test --select "orders"
-dbt build --select "orders"
-```
-
-
-
-
-
-It is possible to prevent tests from running if one or more of its parents is unselected (and therefore unbuilt); we call this "cautious" indirect selection.
-
-It will only include tests whose references are each within the selected nodes.
-
-Put another way, it will prevent tests from running if one or more of its parents is unselected.
-
-```shell
-
-dbt test --select "orders" --indirect-selection=cautious
-dbt build --select "orders" --indirect-selection=cautious
-
-```
-
-
-
-
-
-
-
### Test selection syntax examples
diff --git a/website/docs/reference/node-selection/yaml-selectors.md b/website/docs/reference/node-selection/yaml-selectors.md
index 79eb53c1bee..d911eb44baa 100644
--- a/website/docs/reference/node-selection/yaml-selectors.md
+++ b/website/docs/reference/node-selection/yaml-selectors.md
@@ -57,51 +57,6 @@ This is the most thorough syntax, which can include the operator-equivalent keyw
Review [methods](/reference/node-selection/methods) for the available list.
-
-
-
-```yml
-definition:
- method: tag
- value: nightly
-
- # Optional keywords map to the `+` and `@` graph operators:
-
- children: true | false
- parents: true | false
-
- children_depth: 1 # if children: true, degrees to include
- parents_depth: 1 # if parents: true, degrees to include
-
- childrens_parents: true | false # @ operator
-
- indirect_selection: eager | cautious # include all tests selected indirectly? eager by default
-```
-
-
-
-
-
-```yml
-definition:
- method: tag
- value: nightly
-
- # Optional keywords map to the `+` and `@` graph operators:
-
- children: true | false
- parents: true | false
-
- children_depth: 1 # if children: true, degrees to include
- parents_depth: 1 # if parents: true, degrees to include
-
- childrens_parents: true | false # @ operator
-
- indirect_selection: eager | cautious | buildable # include all tests selected indirectly? eager by default
-```
-
-
-
```yml
@@ -158,13 +113,6 @@ Note: The `exclude` argument in YAML selectors is subtly different from
the `--exclude` CLI argument. Here, `exclude` _always_ returns a [set difference](https://en.wikipedia.org/wiki/Complement_(set_theory)),
and it is always applied _last_ within its scope.
-
-
-This gets us more intricate subset definitions than what's available on the CLI,
-where we can only pass one "yeslist" (`--select`) and one "nolist" (`--exclude`).
-
-
-
When more than one "yeslist" (`--select`) is passed, they are treated as a [union](/reference/node-selection/set-operators#unions) rather than an [intersection](/reference/node-selection/set-operators#intersections). Same thing when there is more than one "nolist" (`--exclude`).
@@ -173,48 +121,6 @@ When more than one "yeslist" (`--select`) is passed, they are treated as a [unio
#### Indirect selection
-
-
-As a general rule, dbt will indirectly select _all_ tests if they touch _any_ resource that you're selecting directly. We call this "eager" indirect selection. You can optionally switch the indirect selection mode to "cautious" by setting `indirect_selection` for a specific criterion:
-
-```yml
-- union:
- - method: fqn
- value: model_a
- indirect_selection: eager # default: will include all tests that touch model_a
- - method: fqn
- value: model_b
- indirect_selection: cautious # will not include tests touching model_b
- # if they have other unselected parents
-```
-
-If provided, a YAML selector's `indirect_selection` value will take precedence over the CLI flag `--indirect-selection`. Because `indirect_selection` is defined separately for _each_ selection criterion, it's possible to mix eager/cautious modes within the same definition, to achieve the exact behavior that you need. Remember that you can always test out your critiera with `dbt ls --selector`.
-
-
-
-
-
-As a general rule, dbt will indirectly select _all_ tests if they touch _any_ resource that you're selecting directly. We call this "eager" indirect selection. You can optionally switch the indirect selection mode to "cautious" or "buildable" by setting `indirect_selection` for a specific criterion:
-
-```yml
-- union:
- - method: fqn
- value: model_a
- indirect_selection: eager # default: will include all tests that touch model_a
- - method: fqn
- value: model_b
- indirect_selection: cautious # will not include tests touching model_b
- # if they have other unselected parents
- - method: fqn
- value: model_c
- indirect_selection: buildable # will not include tests touching model_c
- # if they have other unselected parents (unless they have an ancestor that is selected)
-```
-
-If provided, a YAML selector's `indirect_selection` value will take precedence over the CLI flag `--indirect-selection`. Because `indirect_selection` is defined separately for _each_ selection criterion, it's possible to mix eager/cautious/buildable modes within the same definition, to achieve the exact behavior that you need. Remember that you can always test out your critiera with `dbt ls --selector`.
-
-
-
As a general rule, dbt will indirectly select _all_ tests if they touch _any_ resource that you're selecting directly. We call this "eager" indirect selection. You can optionally switch the indirect selection mode to "cautious", "buildable", or "empty" by setting `indirect_selection` for a specific criterion:
diff --git a/website/docs/reference/programmatic-invocations.md b/website/docs/reference/programmatic-invocations.md
index b38d6890fac..ed9102075df 100644
--- a/website/docs/reference/programmatic-invocations.md
+++ b/website/docs/reference/programmatic-invocations.md
@@ -30,7 +30,7 @@ for r in res.result:
- Running simultaneous commands can unexpectedly interact with the data platform. For example, running `dbt run` and `dbt build` for the same models simultaneously could lead to unpredictable results.
- Each `dbt-core` command interacts with global Python variables. To ensure safe operation, commands need to be executed in separate processes, which can be achieved using methods like spawning processes or using tools like Celery.
-To run [safe parallel execution](/reference/dbt-commands#available-commands), you can use the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) or [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), both of which does that additional work to manage concurrency (multiple processes) on the your behalf.
+To run [safe parallel execution](/reference/dbt-commands#available-commands), you can use the [dbt Cloud CLI](/docs/cloud/cloud-cli-installation) or [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud), both of which do the additional work to manage concurrency (multiple processes) on your behalf.
## `dbtRunnerResult`
diff --git a/website/docs/reference/resource-configs/bigquery-configs.md b/website/docs/reference/resource-configs/bigquery-configs.md
index 075edc8bc95..2ae314cc8f7 100644
--- a/website/docs/reference/resource-configs/bigquery-configs.md
+++ b/website/docs/reference/resource-configs/bigquery-configs.md
@@ -103,8 +103,6 @@ as (
-
-
#### Partitioning by an "ingestion" date or timestamp
BigQuery supports an [older mechanism of partitioning](https://cloud.google.com/bigquery/docs/partitioned-tables#ingestion_time) based on the time when each row was ingested. While we recommend using the newer and more ergonomic approach to partitioning whenever possible, for very large datasets, there can be some performance improvements to using this older, more mechanistic approach. [Read more about the `insert_overwrite` incremental strategy below](#copying-ingestion-time-partitions).
@@ -172,8 +170,6 @@ select created_date as _partitiontime, * EXCEPT(created_date) from (
-
-
#### Partitioning with integer buckets
If the `data_type` is specified as `int64`, then a `range` key must also
@@ -594,8 +590,6 @@ with events as (
... rest of model ...
```
-
-
#### Copying partitions
If you are replacing entire partitions in your incremental runs, you can opt to do so with the [copy table API](https://cloud.google.com/bigquery/docs/managing-tables#copy-table) and partition decorators rather than a `merge` statement. While this mechanism doesn't offer the same visibility and ease of debugging as the SQL `merge` statement, it can yield significant savings in time and cost for large datasets because the copy table API does not incur any costs for inserting the data - it's equivalent to the `bq cp` gcloud command line interface (CLI) command.
@@ -639,8 +633,6 @@ from {{ ref('events') }}
-
-
## Controlling table expiration
By default, dbt-created tables never expire. You can configure certain model(s)
@@ -715,7 +707,8 @@ models:
Views with this configuration will be able to select from objects in `project_1.dataset_1` and `project_2.dataset_2`, even when they are located elsewhere and queried by users who do not otherwise have access to `project_1.dataset_1` and `project_2.dataset_2`.
#### Limitations
-Starting in v1.4, `grant_access_to` config _is thread-safe_. In earlier versions, it wasn't safe to use multiple threads for authorizing several views at once with `grant_access_to` for the same dataset. Initially, after adding a new `grant_access_to` setting, you can execute `dbt run` in a single thread. Later runs with the same configuration won't repeat the existing access grants and can use multiple threads.
+
+The `grant_access_to` config _is thread-safe_. In versions prior to v1.4, it wasn't safe to use multiple threads for authorizing several views at once with `grant_access_to` for the same dataset. Initially, after adding a new `grant_access_to` setting, you can execute `dbt run` in a single thread. Later runs with the same configuration won't repeat the existing access grants and can use multiple threads.
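A sketch of that rollout pattern (the model name is hypothetical):

```shell
# First run after adding grant_access_to: authorize the view using a single thread
dbt run --select my_authorized_view --threads 1

# Later runs with the same configuration can use the default thread count
dbt run --select my_authorized_view
```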
diff --git a/website/docs/reference/resource-configs/docs.md b/website/docs/reference/resource-configs/docs.md
index bb0f3714dd4..aa19ae51888 100644
--- a/website/docs/reference/resource-configs/docs.md
+++ b/website/docs/reference/resource-configs/docs.md
@@ -178,8 +178,6 @@ models:
show: false
```
-
-
### Mark a subfolder of models as hidden
**Note:** This can also hide dbt packages.
@@ -203,10 +201,6 @@ models:
-
-
-
-
## Custom node colors
The `docs` attribute now supports `node_color` to customize the display color of some node types in the DAG within dbt docs. You can define node colors in the following files and apply overrides where needed. Note that you need to run or re-run the `dbt docs generate` command.
@@ -342,5 +336,3 @@ models:
```
-
-
diff --git a/website/docs/reference/resource-configs/enabled.md b/website/docs/reference/resource-configs/enabled.md
index 71431b2a9dd..febf1e50c88 100644
--- a/website/docs/reference/resource-configs/enabled.md
+++ b/website/docs/reference/resource-configs/enabled.md
@@ -176,8 +176,6 @@ sources:
-
-
```yaml
@@ -201,14 +199,10 @@ metrics:
-
-
-
-
```yaml
@@ -232,8 +226,6 @@ exposures:
-
-
diff --git a/website/docs/reference/resource-configs/full_refresh.md b/website/docs/reference/resource-configs/full_refresh.md
index 2ef1cc35fa9..26a2364a8c6 100644
--- a/website/docs/reference/resource-configs/full_refresh.md
+++ b/website/docs/reference/resource-configs/full_refresh.md
@@ -68,12 +68,8 @@ Optionally set a resource to always or never full-refresh.
`full_refresh` config will take precedence over the presence or absence of the `--full-refresh` flag.
- If the `full_refresh` config is `none` or omitted, the resource will use the value of the `--full-refresh` flag.
-
-
**Note:** The `--full-refresh` flag also supports a short name, `-f`.
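For instance (the model name is hypothetical):

```shell
# These two invocations are equivalent
dbt run --full-refresh --select my_incremental_model
dbt run -f --select my_incremental_model
```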
-
-
This logic is encoded in the [`should_full_refresh()`](https://github.com/dbt-labs/dbt-adapters/blob/60005a0a2bd33b61cb65a591bc1604b1b3fd25d5/dbt/include/global_project/macros/materializations/configs.sql) macro.
## Usage
diff --git a/website/docs/reference/resource-configs/group.md b/website/docs/reference/resource-configs/group.md
index 5aa8f91e4b2..7b80d70b6fc 100644
--- a/website/docs/reference/resource-configs/group.md
+++ b/website/docs/reference/resource-configs/group.md
@@ -18,12 +18,6 @@ id: "group"
}>
-
-
-Support for grouping models was added in dbt Core v1.5
-
-
-
@@ -72,12 +66,6 @@ select ...
-
-
-Support for grouping seeds was added in dbt Core v1.5
-
-
-
@@ -107,12 +95,6 @@ seeds:
-
-
-Support for grouping snapshots was added in dbt Core v1.5
-
-
-
@@ -148,12 +130,6 @@ select ...
-
-
-Support for grouping tests was added in dbt Core v1.5
-
-
-
@@ -231,12 +207,6 @@ analyses:
-
-
-Support for grouping metrics was added in dbt Core v1.5
-
-
-
diff --git a/website/docs/reference/resource-configs/singlestore-configs.md b/website/docs/reference/resource-configs/singlestore-configs.md
index b172a549d5f..2317c05c0e2 100644
--- a/website/docs/reference/resource-configs/singlestore-configs.md
+++ b/website/docs/reference/resource-configs/singlestore-configs.md
@@ -3,15 +3,11 @@ title: "SingleStore configurations"
id: "singlestore-configs"
---
-
-
## Incremental materialization strategies
The [`incremental_strategy` config](/docs/build/incremental-models#about-incremental_strategy) controls how dbt builds incremental models. Currently, SingleStoreDB supports only the `delete+insert` configuration.
The `delete+insert` incremental strategy directs dbt to follow a two-step incremental approach. Initially, it identifies and removes the records flagged by the configured `is_incremental()` block. Subsequently, it re-inserts these records.
-
-
## Performance Optimizations
[SingleStore Physical Database Schema Design documentation](https://docs.singlestore.com/managed-service/en/create-a-database/physical-database-schema-design/concepts-of-physical-database-schema-design.html) is helpful if you want to use specific options (that are described below) in your dbt project.
diff --git a/website/docs/reference/resource-configs/snowflake-configs.md b/website/docs/reference/resource-configs/snowflake-configs.md
index d420609a58e..9d70d28eae8 100644
--- a/website/docs/reference/resource-configs/snowflake-configs.md
+++ b/website/docs/reference/resource-configs/snowflake-configs.md
@@ -299,13 +299,11 @@ models:
-
-
## Temporary tables
-Beginning in dbt version 1.3, incremental table merges for Snowflake prefer to utilize a `view` rather than a `temporary table`. The reasoning was to avoid the database write step that a temporary table would initiate and save compile time.
+Incremental table merges for Snowflake prefer to utilize a `view` rather than a `temporary table`. The reasoning is to avoid the database write step that a temporary table would initiate, which saves compile time.
-However, some situations remain where a temporary table would achieve results faster or more safely. dbt v1.4 adds the `tmp_relation_type` configuration to allow you to opt in to temporary tables for incremental builds. This is defined as part of the model configuration.
+However, some situations remain where a temporary table would achieve results faster or more safely. The `tmp_relation_type` configuration enables you to opt in to temporary tables for incremental builds. This is defined as part of the model configuration.
To guarantee accuracy, an incremental model using the `delete+insert` strategy with a `unique_key` defined requires a temporary table; trying to change this to a view will result in an error.
@@ -340,8 +338,6 @@ In the configuration format for the model SQL file:
-
-
## Dynamic tables
diff --git a/website/docs/reference/resource-properties/config.md b/website/docs/reference/resource-properties/config.md
index ba8f46a4cdb..8190c7dd8ca 100644
--- a/website/docs/reference/resource-properties/config.md
+++ b/website/docs/reference/resource-properties/config.md
@@ -133,8 +133,6 @@ sources:
-
-
```yml
@@ -150,14 +148,10 @@ metrics:
-
-
-
-
```yml
@@ -172,8 +166,6 @@ exposures:
-
-
diff --git a/website/docs/terms/dag.md b/website/docs/terms/dag.md
index c6b91300bfc..0216332d953 100644
--- a/website/docs/terms/dag.md
+++ b/website/docs/terms/dag.md
@@ -55,7 +55,7 @@ You can additionally use your DAG to help identify bottlenecks, long-running dat
- Complex logic stored in views
- Good old large volumes of data
-...to name just a few. Understanding the factors impacting model performance can help you decide on [refactoring approaches](https://courses.getdbt.com/courses/refactoring-sql-for-modularity), [changing model materialization](https://docs.getdbt.com/blog/how-we-shaved-90-minutes-off-model#attempt-2-moving-to-an-incremental-model)s, replacing multiple joins with surrogate keys, or other methods.
+...to name just a few. Understanding the factors impacting model performance can help you decide on [refactoring approaches](https://learn.getdbt.com/courses/refactoring-sql-for-modularity), [changing model materialization](https://docs.getdbt.com/blog/how-we-shaved-90-minutes-off-model#attempt-2-moving-to-an-incremental-model)s, replacing multiple joins with surrogate keys, or other methods.
diff --git a/website/docs/terms/dataframe.md b/website/docs/terms/dataframe.md
index 5e74a4c7668..e91b5d59cf6 100644
--- a/website/docs/terms/dataframe.md
+++ b/website/docs/terms/dataframe.md
@@ -77,7 +77,7 @@ Let’s just say it: there’s a lot of things you can do in Python that could d
In practice, this could look like applying an [IP parser](https://pypi.org/project/ipparser/) to an IP address column, using a package to determine whether a [date falls on a holiday](/docs/build/python-models#using-pypi-packages), or leveraging [numpy](https://numpy.org/) for performant and complex mathematical computations.
:::tip dbt x Python DataFrames
-With v1.3, dbt now supports the use of beta [Python models in dbt](/docs/build/python-models). What does this mean exactly? This means that Python-defined data transformations can be created and used in a dbt project in the same vein as a classic dbt SQL model. These Python models are incredibly new and the team is eagerly looking for feedback in how folks want to use and ritualize them.
+dbt supports the use of beta [Python models in dbt](/docs/build/python-models). What does this mean exactly? Python-defined data transformations can be created and used in a dbt project in the same vein as a classic dbt SQL model. These Python models are still very new, and the team is eagerly looking for feedback on how folks want to use and ritualize them.
:::
### Manipulation of data during extraction and loading scripts
diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js
index 004e8351827..828223f30cd 100644
--- a/website/docusaurus.config.js
+++ b/website/docusaurus.config.js
@@ -128,7 +128,7 @@ var siteSettings = {
items: [
{
label: "Courses",
- href: "https://courses.getdbt.com",
+ href: "https://learn.getdbt.com",
},
{
label: "Best Practices",
diff --git a/website/sidebars.js b/website/sidebars.js
index a5992389ea1..37d4dae1b7c 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -35,7 +35,7 @@ const sidebarSettings = {
"docs/cloud/about-cloud/browsers",
],
}, // About dbt Cloud directory
- {
+ {
type: "category",
label: "Set up dbt",
collapsed: true,
@@ -197,7 +197,7 @@ const sidebarSettings = {
"docs/core/connect-data-platform/extrica-setup",
"docs/core/connect-data-platform/fal-setup",
"docs/core/connect-data-platform/firebolt-setup",
- "docs/core/connect-data-platform/greenplum-setup",
+ "docs/core/connect-data-platform/greenplum-setup",
"docs/core/connect-data-platform/ibmdb2-setup",
"docs/core/connect-data-platform/infer-setup",
"docs/core/connect-data-platform/iomete-setup",
@@ -241,33 +241,33 @@ const sidebarSettings = {
link: { type: "doc", id: "docs/cloud/cloud-cli-installation" },
items: [
"docs/cloud/cloud-cli-installation",
- "docs/cloud/configure-cloud-cli",
- ],
- },
- {
- type: "category",
- label: "dbt Cloud IDE",
- link: { type: "doc", id: "docs/cloud/dbt-cloud-ide/develop-in-the-cloud" },
- items: [
- "docs/cloud/dbt-cloud-ide/develop-in-the-cloud",
- "docs/cloud/dbt-cloud-ide/keyboard-shortcuts",
- "docs/cloud/dbt-cloud-ide/ide-user-interface",
- "docs/cloud/dbt-cloud-ide/lint-format",
- {
- type: "category",
- label: "dbt Assist",
- link: { type: "doc", id:"docs/cloud/dbt-assist" },
- items: [
- "docs/cloud/dbt-assist",
- "docs/cloud/enable-dbt-assist",
- "docs/cloud/use-dbt-assist",
- "docs/cloud/dbt-assist-data",
- ],
- },
- ],
- },
- ],
- },
+ "docs/cloud/configure-cloud-cli",
+ ],
+ },
+ {
+ type: "category",
+ label: "dbt Cloud IDE",
+ link: { type: "doc", id: "docs/cloud/dbt-cloud-ide/develop-in-the-cloud" },
+ items: [
+ "docs/cloud/dbt-cloud-ide/develop-in-the-cloud",
+ "docs/cloud/dbt-cloud-ide/keyboard-shortcuts",
+ "docs/cloud/dbt-cloud-ide/ide-user-interface",
+ "docs/cloud/dbt-cloud-ide/lint-format",
+ {
+ type: "category",
+ label: "dbt Assist",
+ link: { type: "doc", id: "docs/cloud/dbt-assist" },
+ items: [
+ "docs/cloud/dbt-assist",
+ "docs/cloud/enable-dbt-assist",
+ "docs/cloud/use-dbt-assist",
+ "docs/cloud/dbt-assist-data",
+ ],
+ },
+ ],
+ },
+ ],
+ },
{
type: "category",
label: "Build dbt projects",
@@ -316,7 +316,7 @@ const sidebarSettings = {
link: { type: "doc", id: "docs/build/build-metrics-intro" },
collapsed: true,
items: [
- "docs/build/build-metrics-intro",
+ "docs/build/build-metrics-intro",
{
type: "link",
label: "Quickstart with the dbt Cloud Semantic Layer",
@@ -381,8 +381,8 @@ const sidebarSettings = {
{
type: "category",
label: "Incremental models",
- link: { type: "doc", id: "docs/build/incremental-models-overview",},
- items: [
+ link: { type: "doc", id: "docs/build/incremental-models-overview", },
+ items: [
"docs/build/incremental-models-overview",
"docs/build/incremental-models",
"docs/build/incremental-strategy",
@@ -535,16 +535,6 @@ const sidebarSettings = {
"docs/use-dbt-semantic-layer/exports",
"docs/use-dbt-semantic-layer/sl-cache",
"docs/use-dbt-semantic-layer/sl-faqs",
- {
- type: "category",
- label: "Available integrations",
- link: { type: "doc", id: "docs/use-dbt-semantic-layer/avail-sl-integrations" },
- items: [
- "docs/use-dbt-semantic-layer/avail-sl-integrations",
- "docs/use-dbt-semantic-layer/gsheets",
- "docs/use-dbt-semantic-layer/tableau",
- ],
- },
],
},
{
@@ -628,20 +618,20 @@ const sidebarSettings = {
],
},
{
- type: "category",
- label: "Applied",
- items: [
- "docs/dbt-cloud-apis/discovery-schema-environment-applied-modelHistoricalRuns",
- ],
- },
- // Uncomment to add Definition subpage, but need to make items non-empty
- // {
- // type: "category",
- // label: "Definition",
- // items: [
- // // insert pages here
- // ],
- // },
+ type: "category",
+ label: "Applied",
+ items: [
+ "docs/dbt-cloud-apis/discovery-schema-environment-applied-modelHistoricalRuns",
+ ],
+ },
+ // Uncomment to add Definition subpage, but need to make items non-empty
+ // {
+ // type: "category",
+ // label: "Definition",
+ // items: [
+ // // insert pages here
+ // ],
+ // },
],
},
],
@@ -659,6 +649,35 @@ const sidebarSettings = {
},
],
},
+
+ {
+ type: "category",
+ label: "dbt Cloud integrations",
+ link: { type: "doc", id: "docs/cloud-integrations/overview" },
+ items: [
+ "docs/cloud-integrations/overview",
+ {
+ type: "category",
+ label: "Snowflake Native App",
+ link: { type: "doc", id: "docs/cloud-integrations/snowflake-native-app" },
+ items: [
+ "docs/cloud-integrations/snowflake-native-app",
+ "docs/cloud-integrations/set-up-snowflake-native-app",
+ ],
+ },
+ {
+ type: "category",
+ label: "Semantic Layer integrations",
+ link: { type: "doc", id: "docs/cloud-integrations/avail-sl-integrations" },
+ items: [
+ "docs/cloud-integrations/avail-sl-integrations",
+ "docs/cloud-integrations/semantic-layer/gsheets",
+ "docs/cloud-integrations/semantic-layer/tableau",
+ ],
+ },
+ ],
+ },
+
{
type: "category",
label: "Available dbt versions",
diff --git a/website/snippets/_new-sl-setup.md b/website/snippets/_new-sl-setup.md
index 74273a4c617..bd1410d89c6 100644
--- a/website/snippets/_new-sl-setup.md
+++ b/website/snippets/_new-sl-setup.md
@@ -26,6 +26,6 @@ Now that we've created and successfully run a job in your environment, you're re
-7. Save and copy your environment ID, service token, and host, which you'll need to use in the downstream tools. For more info on how to integrate with partner integrations, refer to [Available integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations).
+7. Save and copy your environment ID, service token, and host, which you'll need to use in the downstream tools. For more information on integrating with partner tools, refer to [Available integrations](/docs/cloud-integrations/avail-sl-integrations).
8. Return to the **Project Details** page and click the **[Generate a Service Token](/docs/dbt-cloud-apis/service-tokens)** button. Make sure it has Semantic Layer Only and Metadata Only permissions. Name the token and save it. Once the token is generated, you won't be able to view this token again so make sure to record it somewhere safe.
diff --git a/website/snippets/_sl-connect-and-query-api.md b/website/snippets/_sl-connect-and-query-api.md
index 187cf221172..43790ba7940 100644
--- a/website/snippets/_sl-connect-and-query-api.md
+++ b/website/snippets/_sl-connect-and-query-api.md
@@ -1,11 +1,11 @@
This section will guide you on how to use the Google Sheets integration. Query metrics using other tools:
-- [First-class integrations](/docs/use-dbt-semantic-layer/avail-sl-integrations) with Tableau, Hex, and more.
+- [First-class integrations](/docs/cloud-integrations/avail-sl-integrations) with Tableau, Hex, and more.
- The [dbt Semantic Layer APIs](/docs/dbt-cloud-apis/sl-api-overview).
- [Exports](/docs/use-dbt-semantic-layer/exports) to expose tables of metrics and dimensions in your data platform and create a custom integration with tools like PowerBI.
To query your metrics using Google Sheets:
1. Make sure you have a [Gmail](http://gmail.com/) account.
-2. To set up Google Sheets and query your metrics, follow the detailed instructions on [Google Sheets integration](/docs/use-dbt-semantic-layer/gsheets).
+2. To set up Google Sheets and query your metrics, follow the detailed instructions on [Google Sheets integration](/docs/cloud-integrations/semantic-layer/gsheets).
3. Start exploring and querying metrics!
- Query a metric, like `order_total`, and filter it with a dimension, like `order_date`.
- You can also use the `group_by` parameter to group your metrics by a specific dimension.
diff --git a/website/snippets/_sl-course.md b/website/snippets/_sl-course.md
index 3ecd5edc1f3..6be9ec7e959 100644
--- a/website/snippets/_sl-course.md
+++ b/website/snippets/_sl-course.md
@@ -1,8 +1,9 @@
📹 Learn about the dbt Semantic Layer with on-demand video courses!
-Explore our [dbt Semantic Layer on-demand course](https://courses.getdbt.com/courses/semantic-layer) to learn how to define and query metrics in your dbt project.
+Explore our [dbt Semantic Layer on-demand course](https://learn.getdbt.com/courses/semantic-layer) to learn how to define and query metrics in your dbt project.
Additionally, dive into mini-courses for querying the dbt Semantic Layer in your favorite tools: [Tableau](https://courses.getdbt.com/courses/tableau-querying-the-semantic-layer), [Hex](https://courses.getdbt.com/courses/hex-querying-the-semantic-layer), and [Mode](https://courses.getdbt.com/courses/mode-querying-the-semantic-layer).
+
diff --git a/website/snippets/_sl-partner-links.md b/website/snippets/_sl-partner-links.md
index 00fbd6145f2..7b94c901456 100644
--- a/website/snippets/_sl-partner-links.md
+++ b/website/snippets/_sl-partner-links.md
@@ -6,13 +6,13 @@ The following tools integrate with the dbt Semantic Layer:
diff --git a/website/src/pages/index.js b/website/src/pages/index.js
index 4550048c38b..f0eef002b8c 100644
--- a/website/src/pages/index.js
+++ b/website/src/pages/index.js
@@ -164,7 +164,7 @@ function Home() {
diff --git a/website/static/img/docs/cloud-integrations/architecture-dbt-snowflake-native-app.png b/website/static/img/docs/cloud-integrations/architecture-dbt-snowflake-native-app.png
new file mode 100644
index 00000000000..0c1ae01ee83
Binary files /dev/null and b/website/static/img/docs/cloud-integrations/architecture-dbt-snowflake-native-app.png differ
diff --git a/website/static/img/docs/cloud-integrations/example-ask-dbt-native-app.png b/website/static/img/docs/cloud-integrations/example-ask-dbt-native-app.png
new file mode 100644
index 00000000000..39dcfbf8e12
Binary files /dev/null and b/website/static/img/docs/cloud-integrations/example-ask-dbt-native-app.png differ
diff --git a/website/static/img/docs/cloud-integrations/example-dbt-snowflake-native-app.png b/website/static/img/docs/cloud-integrations/example-dbt-snowflake-native-app.png
new file mode 100644
index 00000000000..b44516424b1
Binary files /dev/null and b/website/static/img/docs/cloud-integrations/example-dbt-snowflake-native-app.png differ
diff --git a/website/static/img/docs/cloud-integrations/example-snowflake-native-app-service-token.png b/website/static/img/docs/cloud-integrations/example-snowflake-native-app-service-token.png
new file mode 100644
index 00000000000..7e4c7ab99da
Binary files /dev/null and b/website/static/img/docs/cloud-integrations/example-snowflake-native-app-service-token.png differ
diff --git a/website/static/img/docs/cloud-integrations/semantic_layer_configuration.png b/website/static/img/docs/cloud-integrations/semantic_layer_configuration.png
new file mode 100644
index 00000000000..9dc3a7bb6ed
Binary files /dev/null and b/website/static/img/docs/cloud-integrations/semantic_layer_configuration.png differ
diff --git a/website/vercel.json b/website/vercel.json
index 4e2373675b0..d0660bb3dad 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -2,6 +2,21 @@
"cleanUrls": true,
"trailingSlash": false,
"redirects": [
+ {
+ "source": "/docs/use-dbt-semantic-layer/tableau",
+ "destination": "/docs/cloud-integrations/semantic-layer/tableau",
+ "permanent": true
+ },
+ {
+ "source": "/docs/use-dbt-semantic-layer/gsheets",
+ "destination": "/docs/cloud-integrations/semantic-layer/gsheets",
+ "permanent": true
+ },
+ {
+ "source": "/docs/use-dbt-semantic-layer/avail-sl-integrations",
+ "destination": "/docs/cloud-integrations/avail-sl-integrations",
+ "permanent": true
+ },
{
"source": "/reference/resource-properties/unit-tests#unit-testing-versioned-models",
"destination": "/reference/resource-properties/unit-testing-versions",