Commit c06da2b

Merge branch 'dbt-labs:current' into current
2 parents bf2f45e + 95dfba3 commit c06da2b

25 files changed: +205, -106 lines

website/docs/docs/build/metricflow-time-spine.md

Lines changed: 33 additions & 6 deletions
@@ -7,7 +7,6 @@ tags: [Metrics, Semantic Layer]
 ---
 <VersionBlock firstVersion="1.9">

-<!-- this whole section is for 1.9 and higher + Release Tracks -->

 It's common in analytics engineering to have a date dimension or "time spine" table as a base table for different types of time-based joins and aggregations. The structure of this table is typically a base column of daily or hourly dates, with additional columns for other time grains, like fiscal quarters, defined based on the base column. You can join other tables to the time spine on the base column to calculate metrics like revenue at a point in time, or to aggregate to a specific time grain.

@@ -23,7 +22,7 @@ To see the generated SQL for the metric and dimension types that use time spine

 ## Configuring time spine in YAML

-Time spine models are normal dbt models with extra configurations that tell dbt and MetricFlow how to use specific columns by defining their properties. Add the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. If your project already includes a calendar table or date dimension, you can configure that table as a time spine. Otherwise, review the [example time-spine tables](#example-time-spine-tables) to create one.
+Time spine models are normal dbt models with extra configurations that tell dbt and MetricFlow how to use specific columns by defining their properties. Add the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. If your project already includes a calendar table or date dimension, you can configure that table as a time spine. Otherwise, review the [example time-spine tables](#example-time-spine-tables) to create one. If the relevant model file (`util/_models.yml`) doesn't exist, create it and add the configuration mentioned in the [next section](#creating-a-time-spine-table).

 Some things to note when configuring time spine models:

@@ -34,9 +33,9 @@ To see the generated SQL for the metric and dimension types that use time spine
 - If you're looking to specify the grain of a time dimension so that MetricFlow can transform the underlying column to the required granularity, refer to the [Time granularity documentation](/docs/build/dimensions?dimension=time_gran)

 :::tip
-If you previously used a model called `metricflow_time_spine`, you no longer need to create this specific model. You can now configure MetricFlow to use any date dimension or time spine table already in your project by updating the `model` setting in the Semantic Layer.
-
-If you don’t have a date dimension table, you can still create one by using the code snippet in the [next section](#creating-a-time-spine-table) to build your time spine model.
+- If you previously used a `metricflow_time_spine.sql` model, you can delete it after configuring the `time_spine` property in YAML. The Semantic Layer automatically recognizes the new configuration. No additional `.yml` files are needed.
+- You can also configure MetricFlow to use any date dimension or time spine table already in your project by updating the `model` setting in the Semantic Layer.
+- If you don’t have a date dimension table, you can still create one by using the code snippet in the [next section](#creating-a-time-spine-table) to build your time spine model.
 :::

 ### Creating a time spine table
@@ -112,9 +111,37 @@ models:

 For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example.

+### Migrating from SQL to YAML
+If your project already includes a time spine (`metricflow_time_spine.sql`), you can migrate its configuration to YAML to address any deprecation warnings you may get.
+
+1. Add the following configuration to a new or existing YAML file using the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. Name the YAML file whatever you want (for example, `util/_models.yml`):
+
+<File name="models/_models.yml">
+
+```yaml
+models:
+  - name: all_days
+    description: A time spine with one row per day, ranging from 2020-01-01 to 2039-12-31.
+    time_spine:
+      standard_granularity_column: date_day # Column for the standard grain of your table
+    columns:
+      - name: date_day
+        granularity: day # Set the granularity of the column
+```
+</File>
+
+2. After adding the YAML configuration, delete the existing `metricflow_time_spine.sql` file from your project to avoid any issues.
+
+3. Test the configuration to ensure compatibility with your production jobs.
+
+Note that if you're migrating from a `metricflow_time_spine.sql` file:
+
+- Replace its functionality by adding the `time_spine` property to YAML as shown in the previous example.
+- Once configured, MetricFlow will recognize the YAML settings, and then the SQL model file can be safely removed.
+
 ### Considerations when choosing which granularities to create{#granularity-considerations}

-- MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain, and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and date_trunc to month.
+- MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain, and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and `date_trunc` to month.
 - You can add a time spine for each granularity you intend to use if query efficiency is more important to you than configuration time, or storage constraints. For most engines, the query performance difference should be minimal and transforming your time spine to a coarser grain at query time shouldn't add significant overhead to your queries.
 - We recommend having a time spine at the finest grain used in any of your dimensions to avoid unexpected errors. For example, if you have dimensions at an hourly grain, you should have a time spine at an hourly grain.
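
To make the first two considerations above concrete, a project that frequently queries month-grain dimensions can register a coarser time spine alongside the daily one, using the same `time_spine` properties shown in this file's example. The sketch below is illustrative only and not part of this commit; the `all_months` model and `date_month` column are assumed names:

```yaml
models:
  - name: all_days
    description: Daily time spine; the finest grain used by any dimension.
    time_spine:
      standard_granularity_column: date_day
    columns:
      - name: date_day
        granularity: day

  - name: all_months
    description: Optional monthly time spine so month-grain queries can skip the date_trunc step.
    time_spine:
      standard_granularity_column: date_month
    columns:
      - name: date_month
        granularity: month
```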

website/docs/docs/build/metrics-overview.md

Lines changed: 1 addition & 0 deletions
@@ -179,6 +179,7 @@ metrics:
         name: active_users
         fill_nulls_with: 0
         join_to_timespine: true
+      cumulative_type_params:
         window: 7 days
 ```
 </File>
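
To see where the added `cumulative_type_params` key sits in a full metric definition, here is a sketch of a complete cumulative metric following the same nesting; the metric name and label are illustrative, while the `measure` and `cumulative_type_params` structure mirrors the snippet above:

```yaml
metrics:
  - name: weekly_active_users
    label: Weekly active users
    type: cumulative
    type_params:
      measure:
        name: active_users
        fill_nulls_with: 0
        join_to_timespine: true
      cumulative_type_params:
        window: 7 days
```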

website/docs/docs/cloud/git/git-configuration-in-dbt-cloud.md

Lines changed: 5 additions & 0 deletions
@@ -43,4 +43,9 @@ Whether you use a Git integration that natively connects with dbt Cloud or prefe
   link="/docs/cloud/git/connect-azure-devops"
   icon="dbt-bit"/>

+<Card
+  title="Availability of CI features by Git provider"
+  body="Learn which Git providers have native support for Continuous Integration workflows"
+  link="/docs/deploy/continuous-integration#git-providers-who-support-ci"
+  icon="dbt-bit"/>
 </div>

website/docs/docs/cloud/git/import-a-project-by-git-url.md

Lines changed: 4 additions & 0 deletions
@@ -8,6 +8,10 @@ In dbt Cloud, you can import a git repository from any valid git URL that points
 ## Git protocols
 You must use the `git@...` or `ssh:..`. version of your git URL, not the `https://...` version. dbt Cloud uses the SSH protocol to clone repositories, so dbt Cloud will be unable to clone repos supplied with the HTTP protocol.

+import GitProvidersCI from '/snippets/_git-providers-supporting-ci.md';
+
+<GitProvidersCI />
+
 ## Managing deploy keys

 After importing a project by Git URL, dbt Cloud will generate a Deploy Key for your repository. To find the deploy key in dbt Cloud:

website/docs/docs/cloud/manage-access/audit-log.md

Lines changed: 5 additions & 3 deletions
@@ -102,9 +102,11 @@ The audit log supports various events for different objects in dbt Cloud. You wi
 | Invite Added | invite.Added | User invitation added and sent to the user |
 | Invite Redeemed | invite.Redeemed | User redeemed invitation |
 | User Added to Account | account.UserAdded | New user added to the account |
-| User Added to Group | user_group_user.Added | An existing user is added to a group |
-| User Removed from Account | account.UserRemoved | User removed from the account
-| User Removed from Group | user_group_user.Removed | An existing user is removed from a group |
+| User Added to Group | user_group_user.Added | An existing user was added to a group |
+| User Removed from Account | account.UserRemoved | User removed from the account |
+| User Removed from Group | user_group_user.Removed | An existing user was removed from a group |
+| User License Created | user_license.added | A new user license was consumed |
+| User License Removed | user_license.removed | A user license was removed from the seat count |
 | Verification Email Confirmed | user.jit.email.Confirmed | Email verification confirmed by user |
 | Verification Email Sent | user.jit.email.Sent | Email verification sent to user created via JIT |

website/docs/docs/cloud/manage-access/set-up-sso-google-workspace.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ Client Secret for use in dbt Cloud.

 6. Save the **Consent screen** settings to navigate back to the **Create OAuth client
 id** page.
-7. Use the following configuration values when creating your Credentials, replacing `YOUR_ACCESS_URL` and `YOUR_AUTH0_URI`, which need to be replaced with the [appropriate Access URL and Auth0 URI](/docs/cloud/manage-access/sso-overview#auth0-multi-tenant-uris) for your region and plan.
+7. Use the following configuration values when creating your Credentials, replacing `YOUR_ACCESS_URL` and `YOUR_AUTH0_URI`, which need to be replaced with the appropriate Access URL and Auth0 URI from your [account settings](/docs/cloud/manage-access/sso-overview#auth0-uris).

 | Config | Value |
 | ------ | ----- |

website/docs/docs/cloud/manage-access/set-up-sso-microsoft-entra-id.md

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@ Log into the Azure portal for your organization. Using the [**Microsoft Entra ID
 | **Name** | dbt Cloud |
 | **Supported account types** | Accounts in this organizational directory only _(single tenant)_ |

-4. Configure the **Redirect URI**. The table below shows the appropriate Redirect URI values for single-tenant and multi-tenant deployments. For most enterprise use-cases, you will want to use the single-tenant Redirect URI. Replace `YOUR_AUTH0_URI` with the [appropriate Auth0 URI](/docs/cloud/manage-access/sso-overview#auth0-multi-tenant-uris) for your region and plan.
+4. Configure the **Redirect URI**. The table below shows the appropriate Redirect URI values for single-tenant and multi-tenant deployments. For most enterprise use-cases, you will want to use the single-tenant Redirect URI. Replace `YOUR_AUTH0_URI` with the [appropriate Auth0 URI](/docs/cloud/manage-access/sso-overview#auth0-uris) for your region and plan.

 | Application Type | Redirect URI |
 | ----- | ----- |
@@ -138,7 +138,7 @@ To complete setup, follow the steps below in the dbt Cloud application.
 | **Client&nbsp;Secret** | Paste the **Client Secret** (remember to use the Secret Value instead of the Secret ID) from the steps above; <br />**Note:** When the client secret expires, an Entra ID admin will have to generate a new one to be pasted into dbt Cloud for uninterrupted application access. |
 | **Tenant&nbsp;ID** | Paste the **Directory (tenant ID)** recorded in the steps above |
 | **Domain** | Enter the domain name for your Azure directory (such as `fishtownanalytics.com`). Only use the primary domain; this won't block access for other domains. |
-| **Slug** | Enter your desired login slug. Users will be able to log into dbt Cloud by navigating to `https://YOUR_ACCESS_URL/enterprise-login/LOGIN-SLUG`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/manage-access/sso-overview#auth0-multi-tenant-uris) for your region and plan. Login slugs must be unique across all dbt Cloud accounts, so pick a slug that uniquely identifies your company. |
+| **Slug** | Enter your desired login slug. Users will be able to log into dbt Cloud by navigating to `https://YOUR_ACCESS_URL/enterprise-login/LOGIN-SLUG`, replacing `YOUR_ACCESS_URL` with the [appropriate Access URL](/docs/cloud/manage-access/sso-overview#auth0-uris) for your region and plan. Login slugs must be unique across all dbt Cloud accounts, so pick a slug that uniquely identifies your company. |

 <Lightbox src="/img/docs/dbt-cloud/dbt-cloud-enterprise/azure/azure-cloud-sso.png" title="Configuring Entra ID AD SSO in dbt Cloud" />

website/docs/docs/cloud/manage-access/set-up-sso-okta.md

Lines changed: 3 additions & 3 deletions
@@ -75,16 +75,16 @@ so pick a slug that uniquely identifies your company.
 * **Single sign on URL**: `https://YOUR_AUTH0_URI/login/callback?connection=<login slug>`
 * **Audience URI (SP Entity ID)**: `urn:auth0:<YOUR_AUTH0_ENTITYID>:{login slug}`
 * **Relay State**: `<login slug>`
+* **Name ID format**: `Unspecified`
+* **Application username**: `Custom` / `user.getInternalProperty("id")`
+* **Update Application username on**: `Create and update`

 <Lightbox
   collapsed={false}
   src="/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png"
   title="Configure the app's SAML Settings"
 />

-<!-- TODO : Will users need to change the Name ID format and Application
-username on this screen? -->
-
 Use the **Attribute Statements** and **Group Attribute Statements** forms to
 map your organization's Okta User and Group Attributes to the format that
 dbt Cloud expects.

website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md

Lines changed: 17 additions & 10 deletions
@@ -59,7 +59,9 @@ Additionally, you may configure the IdP attributes passed from your identity pro
 | email | Unspecified | user.email | The user's email address |
 | first_name | Unspecified | user.first_name | The user's first name |
 | last_name | Unspecified | user.last_name | The user's last name |
-| NameID (if applicable) | Unspecified | user.email | The user's email address |
+| NameID | Unspecified | ID | The user's unchanging ID |
+
+`NameID` values can be persistent (`urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`) rather than unspecified if your IdP supports these values. Using an email address for `NameID` will work, but dbt Cloud creates an entirely new user if that email address changes. Configuring a value that will not change, even if the user's email address does, is a best practice.

 dbt Cloud's [role-based access control](/docs/cloud/manage-access/about-user-access#role-based-access-control) relies
 on group mappings from the IdP to assign dbt Cloud users to dbt Cloud groups. To
@@ -144,6 +146,9 @@ Login slugs must be unique across all dbt Cloud accounts, so pick a slug that un
 * **Single sign on URL**: `https://YOUR_AUTH0_URI/login/callback?connection=<login slug>`
 * **Audience URI (SP Entity ID)**: `urn:auth0:<YOUR_AUTH0_ENTITYID>:<login slug>`
 * **Relay State**: `<login slug>`
+* **Name ID format**: `Unspecified`
+* **Application username**: `Custom` / `user.getInternalProperty("id")`
+* **Update Application username on**: `Create and update`

 <Lightbox collapsed={false} src="/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png" title="Configure the app's SAML Settings"/>

@@ -245,7 +250,7 @@ Login slugs must be unique across all dbt Cloud accounts, so pick a slug that un
 * **Audience URI (SP Entity ID)**: `urn:auth0:<YOUR_AUTH0_ENTITYID>:<login slug>`
 - **Start URL**: `<login slug>`
 5. Select the **Signed response** checkbox.
-6. The default **Name ID** is the primary email. Multi-value input is not supported.
+6. The default **Name ID** is the primary email. Multi-value input is not supported. If your user profile has a unique, stable value that will persist across email address changes, it's best to use that; otherwise, email will work.
 7. Use the **Attribute mapping** page to map your organization's Google Directory Attributes to the format that
 dbt Cloud expects.
 8. Click **Add another mapping** to map additional attributes.
@@ -329,20 +334,22 @@ Follow these steps to set up single sign-on (SSO) with dbt Cloud:
 From the Set up Single Sign-On with SAML page:

 1. Click **Edit** in the User Attributes & Claims section.
-2. Leave the claim under "Required claim" as is.
-3. Delete all claims under "Additional claims."
-4. Click **Add new claim** and add these three new claims:
+2. Click **Unique User Identifier (Name ID)** under **Required claim.**
+3. Set **Name identifier format** to **Unspecified**.
+4. Set **Source attribute** to **user.objectid**.
+5. Delete all claims under **Additional claims.**
+6. Click **Add new claim** and add the following new claims:

 | Name | Source attribute |
 | ----- | ----- |
 | **email** | user.mail |
 | **first_name** | user.givenname |
 | **last_name** | user.surname |

-5. Click **Add a group claim** from User Attributes and Claims.
-6. If you'll assign users directly to the enterprise application, select **Security Groups**. If not, select **Groups assigned to the application**.
-7. Set **Source attribute** to **Group ID**.
-8. Under **Advanced options**, check **Customize the name of the group claim** and specify **Name** to **groups**.
+7. Click **Add a group claim** from **User Attributes and Claims.**
+8. If you assign users directly to the enterprise application, select **Security Groups**. If not, select **Groups assigned to the application**.
+9. Set **Source attribute** to **Group ID**.
+10. Under **Advanced options**, check **Customize the name of the group claim** and specify **Name** to **groups**.

 **Note:** Keep in mind that the Group ID in Entra ID maps to that group's GUID. It should be specified in lowercase for the mappings to work as expected. The Source Attribute field alternatively can be set to a different value of your preference.

@@ -386,7 +393,7 @@ We recommend using the following values:

 | name | name format | value |
 | ---- | ----------- | ----- |
-| NameID | Unspecified | Email |
+| NameID | Unspecified | OneLogin ID |
 | email | Unspecified | Email |
 | first_name | Unspecified | First Name |
 | last_name | Unspecified | Last Name |

website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md

Lines changed: 2 additions & 1 deletion
@@ -103,7 +103,8 @@ You can read more about each of these behavior changes in the following links:

 ### Snowflake

-- Iceberg Table Format support will be available on three out-of-the-box materializations: table, incremental, dynamic tables.
+- Iceberg Table Format &mdash; Support will be available on three out-of-the-box materializations: table, incremental, dynamic tables.
+- Breaking change &mdash; When upgrading from dbt 1.8 to 1.9 `{{ target.account }}` replaces underscores with dashes. For example, if the `target.account` is set to `sample_company`, then the compiled code now generates `sample-company`. [Refer to the `dbt-snowflake` issue](https://github.com/dbt-labs/dbt-snowflake/issues/1286) for more information.

 ### Bigquery
