From b61d83a5837274a522d35126610ebeb271e1d957 Mon Sep 17 00:00:00 2001 From: "Talla, Mohan" Date: Tue, 3 Dec 2024 14:33:32 +0530 Subject: [PATCH 01/53] Updated setup and config pages --- .../connect-data-platform/teradata-setup.md | 20 ++++++++-- .../resource-configs/teradata-configs.md | 37 ++++++++++++------- 2 files changed, 41 insertions(+), 16 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md index f4ffbe37f35..6c9d2f46a41 100644 --- a/website/docs/docs/core/connect-data-platform/teradata-setup.md +++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md @@ -95,7 +95,6 @@ Parameter | Default | Type | Description `browser_tab_timeout` | `"5"` | quoted integer | Specifies the number of seconds to wait before closing the browser tab after Browser Authentication is completed. The default is 5 seconds. The behavior is under the browser's control, and not all browsers support automatic closing of browser tabs. `browser_timeout` | `"180"` | quoted integer | Specifies the number of seconds that the driver will wait for Browser Authentication to complete. The default is 180 seconds (3 minutes). `column_name` | `"false"` | quoted boolean | Controls the behavior of cursor `.description` sequence `name` items. Equivalent to the Teradata JDBC Driver `COLUMN_NAME` connection parameter. False specifies that a cursor `.description` sequence `name` item provides the AS-clause name if available, or the column name if available, or the column title. True specifies that a cursor `.description` sequence `name` item provides the column name if available, but has no effect when StatementInfo parcel support is unavailable. -`connect_failure_ttl` | `"0"` | quoted integer | Specifies the time-to-live in seconds to remember the most recent connection failure for each IP address/port combination. The driver subsequently skips connection attempts to that IP address/port for the duration of the time-to-live. The default value of zero disables this feature. The recommended value is half the database restart time. Equivalent to the Teradata JDBC Driver `CONNECT_FAILURE_TTL` connection parameter. `connect_timeout` | `"10000"` | quoted integer | Specifies the timeout in milliseconds for establishing a TCP socket connection. Specify 0 for no timeout. The default is 10 seconds (10000 milliseconds). `cop` | `"true"` | quoted boolean | Specifies whether COP Discovery is performed. Equivalent to the Teradata JDBC Driver `COP` connection parameter. `coplast` | `"false"` | quoted boolean | Specifies how COP Discovery determines the last COP hostname. Equivalent to the Teradata JDBC Driver `COPLAST` connection parameter. When `coplast` is `false` or omitted, or COP Discovery is turned off, then no DNS lookup occurs for the coplast hostname. When `coplast` is `true`, and COP Discovery is turned on, then a DNS lookup occurs for a coplast hostname. @@ -110,7 +109,7 @@ Parameter | Default | Type | Description `log` | `"0"` | quoted integer | Controls debug logging. Somewhat equivalent to the Teradata JDBC Driver `LOG` connection parameter. This parameter's behavior is subject to change in the future. This parameter's value is currently defined as an integer in which the 1-bit governs function and method tracing, the 2-bit governs debug logging, the 4-bit governs transmit and receive message hex dumps, and the 8-bit governs timing. Compose the value by adding together 1, 2, 4, and/or 8. 
`logdata` | | string | Specifies extra data for the chosen logon authentication method. Equivalent to the Teradata JDBC Driver `LOGDATA` connection parameter. `logon_timeout` | `"0"` | quoted integer | Specifies the logon timeout in seconds. Zero means no timeout. -`logmech` | `"TD2"` | string | Specifies the logon authentication method. Equivalent to the Teradata JDBC Driver `LOGMECH` connection parameter. Possible values are `TD2` (the default), `JWT`, `LDAP`, `KRB5` for Kerberos, or `TDNEGO`. +`logmech` | `"TD2"` | string | Specifies the logon authentication method. Equivalent to the Teradata JDBC Driver `LOGMECH` connection parameter. Possible values are `TD2` (the default), `JWT`, `LDAP`, `BROWSER`, `KRB5` for Kerberos, or `TDNEGO`. `max_message_body` | `"2097000"` | quoted integer | Specifies the maximum Response Message size in bytes. Equivalent to the Teradata JDBC Driver `MAX_MESSAGE_BODY` connection parameter. `partition` | `"DBC/SQL"` | string | Specifies the database partition. Equivalent to the Teradata JDBC Driver `PARTITION` connection parameter. `request_timeout` | `"0"` | quoted integer | Specifies the timeout for executing each SQL request. Zero means no timeout. @@ -210,7 +209,8 @@ For using cross-DB macros, teradata-utils as a macro namespace will not be used, ##### hash - `Hash` macro needs an `md5` function implementation. Teradata doesn't support `md5` natively. You need to install a User Defined Function (UDF): + `Hash` macro needs an `md5` function implementation. Teradata doesn't support `md5` natively. You need to install a User Defined Function (UDF) and optionally specify `md5_udf` [variable](https://docs.getdbt.com/docs/build/project-variables).
+  If not specified the code defaults to using `GLOBAL_FUNCTIONS.hash_md5`. See below instructions on how to install the custom UDF:
   1. Download the md5 UDF implementation from Teradata (registration required): https://downloads.teradata.com/download/extensibility/md5-message-digest-udf.
   1. Unzip the package and go to `src` directory.
   1. Start up `bteq` and connect to your database.
@@ -228,6 +228,12 @@ For using cross-DB macros, teradata-utils as a macro namespace will not be used,
    ```sql
    GRANT EXECUTE FUNCTION ON GLOBAL_FUNCTIONS TO PUBLIC WITH GRANT OPTION;
    ```
+   Instructions on how to add the `md5_udf` variable in `dbt_project.yml` for a custom hash function:
+   ```yaml
+   vars:
+     md5_udf: Custom_database_name.hash_method_function
+   ```
+
 ##### last_day
 
 `last_day` in `teradata_utils`, unlike the corresponding macro in `dbt_utils`, doesn't support `quarter` datepart.
@@ -241,6 +247,14 @@ dbt-teradata 1.8.0 and later versions support unit tests, enabling you to valida
 
 ## Limitations
 
+### Browser Authentication
+When running a dbt job with logmech set to "browser", the initial authentication opens a browser window where you must enter your username and password.
+After authentication, this window remains open, requiring you to manually switch back to the dbt console.
+For every subsequent connection, a new browser tab briefly opens, displaying the message "TERADATA BROWSER AUTHENTICATION COMPLETED," and silently reuses the existing session.
+However, the focus stays on the browser window, so you’ll need to manually switch back to the dbt console each time.
+This behavior is the default functionality of the teradatasql driver and cannot be avoided at this time.
+To prevent session expiration and the need to re-enter credentials, ensure the authentication browser window stays open until the job is complete. + ### Transaction mode Both ANSI and TERA modes are now supported in dbt-teradata. TERA mode's support is introduced with dbt-teradata 1.7.1, it is an initial implementation. diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index 89a2ff76fba..08b442e5b62 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -348,6 +348,18 @@ If a user sets some key-value pair with value as `'{model}'`, internally this `' - For example, if the model the user is running is `stg_orders`, `{model}` will be replaced with `stg_orders` in runtime. - If no `query_band` is set by the user, the default query_band used will be: ```org=teradata-internal-telem;appname=dbt;``` +## Unit Testing +* Unit testing is supported in dbt-teradata, allowing users to write and execute unit tests using the dbt test command. + * For detailed guidance, refer to the dbt documentation. + +* QVCI must be enabled in the database to run unit tests for views. + * Additional details on enabling QVCI can be found in the General section. + * Without QVCI enabled, unit test support for views will be limited. + * Users might encounter the following database error when testing views without QVCI enabled: + ``` + * [Teradata Database] [Error 3706] Syntax error: Data Type "N" does not match a Defined Type name. + ``` + ## valid_history incremental materialization strategy _This is available in early access_ @@ -361,26 +373,27 @@ In temporal databases, valid time is crucial for applications like historical re unique_key='id', on_schema_change='fail', incremental_strategy='valid_history', - valid_from='valid_from_column', - history_column_in_target='history_period_column' + valid_period='valid_period_col', + use_valid_to_time='no', ) }} ``` The `valid_history` incremental strategy requires the following parameters: -* `valid_from` — Column in the source table of **timestamp** datatype indicating when each record became valid. -* `history_column_in_target` — Column in the target table of **period** datatype that tracks history. +* `unique_key`: The primary key of the model (excluding the valid time components), specified as a column name or list of column names. +* `valid_period`: Name of the model column indicating the period for which the record is considered to be valid. The datatype must be `PERIOD(DATE)` or `PERIOD(TIMESTAMP)`. +* `use_valid_to_time`: Wether the end bound value of the valid period in the input is considered by the strategy when building the valid timeline. Use 'no' if you consider your record to be valid until changed (and supply any value greater to the begin bound for the end bound of the period - a typical convention is `9999-12-31` of ``9999-12-31 23:59:59.999999`). Use 'yes' if you know until when the record is valid (typically this is a correction in the history timeline). The valid_history strategy in dbt-teradata involves several critical steps to ensure the integrity and accuracy of historical data management: * Remove duplicates and conflicting values from the source data: * This step ensures that the data is clean and ready for further processing by eliminating any redundant or conflicting records. 
- * The process of removing duplicates and conflicting values from the source data involves using a ranking mechanism to ensure that only the highest-priority records are retained. This is accomplished using the SQL RANK() function. -* Identify and adjust overlapping time slices: - * Overlapping time periods in the data are detected and corrected to maintain a consistent and non-overlapping timeline. -* Manage records needing to be overwritten or split based on the source and target data: + * The process of removing primary key duplicates (ie. two or more records with the same value for the `unique_key` and BEGIN() bond of the `valid_period` fields) in the dataset produced by the model. If such duplicates exist, the row with the lowest value is retained for all non-primary-key fields (in the order specified in the model) is retained. Full-row duplicates are always de-duplicated. +* Identify and adjust overlapping time slices (if use_valid_to_time='yes): + * Overlapping time periods in the data are corrected to maintain a consistent and non-overlapping timeline. To do so, the valid period end bound of a record is adjusted to meet the begin bound of the next record with the same `unique_key` value and overlapping `valid_period` value if any. +* Manage records needing to be adjusted, deleted or split based on the source and target data: * This involves handling scenarios where records in the source data overlap with or need to replace records in the target data, ensuring that the historical timeline remains accurate. -* Utilize the TD_NORMALIZE_MEET function to compact history: - * This function helps to normalize and compact the history by merging adjacent time periods, improving the efficiency and performance of the database. +* Compact history: + * Normalize and compact the history by merging records of adjacent time periods withe same value, optimizing database storage and performance. We use the function TD_NORMALIZE_MEET for this purpose. * Delete existing overlapping records from the target table: * Before inserting new or updated records, any existing records in the target table that overlap with the new data are removed to prevent conflicts. * Insert the processed data into the target table: @@ -416,9 +429,7 @@ These steps collectively ensure that the valid_history strategy effectively mana ``` -:::info -The target table must already exist before running the model. Ensure the target table is created and properly structured with the necessary columns, including a column that tracks the history with period datatype, before running a dbt model. -::: + ## Common Teradata-specific tasks * *collect statistics* - when a table is created or modified significantly, there might be a need to tell Teradata to collect statistics for the optimizer. It can be done using `COLLECT STATISTICS` command. 
You can perform this step using dbt's `post-hooks`, e.g.: From 53e9f07179b3ef18e507030942ae34b89dd2da91 Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Tue, 10 Dec 2024 07:03:46 +0530 Subject: [PATCH 02/53] Update website/docs/docs/core/connect-data-platform/teradata-setup.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/docs/core/connect-data-platform/teradata-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md index 6c9d2f46a41..774f5b5e070 100644 --- a/website/docs/docs/core/connect-data-platform/teradata-setup.md +++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md @@ -209,7 +209,7 @@ For using cross-DB macros, teradata-utils as a macro namespace will not be used, ##### hash - `Hash` macro needs an `md5` function implementation. Teradata doesn't support `md5` natively. You need to install a User Defined Function (UDF) and optionally specify `md5_udf` [variable](https://docs.getdbt.com/docs/build/project-variables).
+ `Hash` macro needs an `md5` function implementation. Teradata doesn't support `md5` natively. You need to install a User Defined Function (UDF) and optionally specify `md5_udf` [variable](/docs/build/project-variables).
If not specified the code defaults to using `GLOBAL_FUNCTIONS.hash_md5`. See below instructions on how to install the custom UDF: 1. Download the md5 UDF implementation from Teradata (registration required): https://downloads.teradata.com/download/extensibility/md5-message-digest-udf. 1. Unzip the package and go to `src` directory. From 5b17cf4192425993e66035bdb0e5dd2b6ffe2370 Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Tue, 10 Dec 2024 07:03:56 +0530 Subject: [PATCH 03/53] Update website/docs/reference/resource-configs/teradata-configs.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/reference/resource-configs/teradata-configs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index 08b442e5b62..8debd1c79ae 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -348,7 +348,7 @@ If a user sets some key-value pair with value as `'{model}'`, internally this `' - For example, if the model the user is running is `stg_orders`, `{model}` will be replaced with `stg_orders` in runtime. - If no `query_band` is set by the user, the default query_band used will be: ```org=teradata-internal-telem;appname=dbt;``` -## Unit Testing +## Unit testing * Unit testing is supported in dbt-teradata, allowing users to write and execute unit tests using the dbt test command. * For detailed guidance, refer to the dbt documentation. From 6b11480b27e4a344cb61f79bbb2b1689371deb1e Mon Sep 17 00:00:00 2001 From: KNagaVivek <79193329+KNagaVivek@users.noreply.github.com> Date: Wed, 8 Jan 2025 17:34:51 +0530 Subject: [PATCH 04/53] Create watsonx-presto-config.md --- .../resource-configs/watsonx-presto-config.md | 113 ++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 website/docs/reference/resource-configs/watsonx-presto-config.md diff --git a/website/docs/reference/resource-configs/watsonx-presto-config.md b/website/docs/reference/resource-configs/watsonx-presto-config.md new file mode 100644 index 00000000000..42111a2fb47 --- /dev/null +++ b/website/docs/reference/resource-configs/watsonx-presto-config.md @@ -0,0 +1,113 @@ +--- +title: "IBM watsonx.data Presto configurations" +id: "watsonx-presto-configs" +--- + +## Instance requirements + +To use IBM watsonx.data Presto(java) with dbt, ensure the instance has an attached catalog that allows creating, renaming, altering, and dropping objects such as tables and views. The user connecting to the instance with dbt must have equivalent permissions for the target catalog. + +## Session properties + +With IBM watsonx.data SaaS/Software, or Presto instance, you can [set session properties](https://prestodb.io/docs/current/sql/set-session.html) to modify the current configuration for your user session. + +To temporarily adjust session properties for a specific dbt model or a group of models, use a [dbt hook](/reference/resource-configs/pre-hook-post-hook). For example: + +```sql +{{ + config( + pre_hook="set session query_max_run_time='10m'" + ) +}} +``` + +## Connector properties + +IBM watsonx.data SaaS/Software and Presto support various connector properties to manage how your data is represented. These properties are particularly useful for file-based connectors like Hive. 
+ +For information on what is supported for each data source, refer to one of the following resources: +- [Presto Connectors](https://prestodb.io/docs/current/connector.html) +- [watsonx.data SaaS Catalog](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-reg_database) +- [watsonx.data Software Catalog](https://www.ibm.com/docs/en/watsonx/watsonxdata/1.1.x?topic=components-adding-database-catalog-pair) + + +### Hive catalogs + +When using the Hive connector, ensure the following settings are configured. These settings are crucial for enabling frequently executed operations like `DROP` and `RENAME` in dbt: + +```java +hive.metastore-cache-ttl=0s +hive.metastore-refresh-interval=5s +hive.allow-drop-table=true +hive.allow-rename-table=true + +``` + +## File format configuration + +For file-based connectors, such as Hive, you can customize table materialization and data formats. For example, to create a partitioned [Parquet](https://spark.apache.org/docs/latest/sql-data-sources-parquet.html) table: + +```sql +{{ + config( + materialized='table', + properties={ + "format": "'PARQUET'", + "partitioning": "ARRAY['bucket(id, 2)']", + } + ) +}} +``` + +## Seeds and prepared statements +The `dbt-watsonx-presto` adapter offers comprehensive support for all [Presto datatypes](https://prestodb.io/docs/current/language/types.html) and [watsonx.data Presto datatypes](https://www.ibm.com/support/pages/node/7157339) in seed files. However, to utilize this feature, you need to explicitly define the data types for each column in the `dbt_project.yml` file. + +To configure column data types, update your `/dbt_project.yml` file as follows: + +```sh +seeds: + : + : + +column_types: + : + : +``` +This ensures that dbt correctly interprets and applies the specified data types when loading seed data into your watsonx.data Presto instances. + + +## Materializations +### Table + +The `dbt-watsonx-presto` adapter helps you create and update tables through table materialization, making it easier to work with data in watsonx.data Presto. + +#### Recommendations +- **Check Permissions:** Ensure that the necessary permissions for table creation are enabled in the catalog or schema. +- **Check Connector Documentation:** Review Presto [connector’s documentation](https://prestodb.io/docs/current/connector.html) or watsonx.data Presto [sql statement support](https://www.ibm.com/support/pages/node/7157339) to ensure it supports table creation and modification. + +#### Limitations with Some Connectors +Certain watsonx.data Presto connectors, particularly read-only ones or those with restricted permissions, do not allow creating or modifying tables. If you attempt to use table materialization with these connectors, you may encounter an error like: + +```sh +PrestoUserError(type=USER_ERROR, name=NOT_SUPPORTED, message="This connector does not support creating tables with data", query_id=20241206_071536_00026_am48r) +``` + +### View + +The `dbt-watsonx-presto` adapter supports creating views using the `materialized='view'` configuration in your dbt model. By default, when you set the materialization to view, it creates a view in watsonx.data Presto. + +```sql +{{ + config( + materialized='view', + ) +}} +``` + +For more details, refer to the watsonx.data [sql statement support](https://www.ibm.com/support/pages/node/7157339) or Presto [connector documentation](https://prestodb.io/docs/current/connector.html) to verify whether your connector supports view creation. 
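To round out the table and view materializations above, the following is a minimal sketch of how a project-wide default is commonly set in `dbt_project.yml`; the project and folder names (`my_project`, `marts`) are hypothetical placeholders:

```yaml
models:
  my_project:
    # Build models as views by default
    +materialized: view
    marts:
      # Materialize the models under models/marts/ as tables instead
      +materialized: table
```

A `config(materialized=...)` block inside a model, as in the earlier examples, takes precedence over these project-level settings.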
### Unsupported Features
The following features are not supported by the `dbt-watsonx-presto` adapter:
- Incremental Materialization
- Materialized Views
- Snapshots

From f0a76761fdae341c7ad6ee0e796ed241f8d86432 Mon Sep 17 00:00:00 2001
From: KNagaVivek <79193329+KNagaVivek@users.noreply.github.com>
Date: Wed, 8 Jan 2025 17:47:46 +0530
Subject: [PATCH 05/53] Create watsonx-presto-setup.md

---
 .../watsonx-presto-setup.md | 105 ++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md

diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md
new file mode 100644
index 00000000000..6bf4ca61a2b
--- /dev/null
+++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md
@@ -0,0 +1,105 @@
---
title: "IBM watsonx.data Presto setup"
description: "Read this guide to learn about the IBM watsonx.data Presto setup in dbt."
id: "watsonx-presto-setup"
meta:
  maintained_by: IBM
  authors: Karnati Naga Vivek, Hariharan Ashokan, Biju Palliyath, Gopikrishnan Varadarajulu, Rohan Pednekar
  github_repo: 'IBM/dbt-watsonx-presto'
  pypi_package: 'dbt-watsonx-presto'
  min_core_version: v1.8.0
  cloud_support: 'Not Supported'
  min_supported_version: 'n/a'
  slack_channel_name:
  slack_channel_link:
  platform_name: IBM watsonx.data
  config_page: /reference/resource-configs/watsonx-presto-config
---

The dbt-watsonx-presto adapter allows you to use dbt to transform and manage data on IBM watsonx.data Presto(Java), leveraging its distributed SQL query engine capabilities. The configuration and connection setup described here are also applicable to open-source Presto. Before proceeding, ensure you have the following:
    +
- An active IBM watsonx.data Presto(Java) engine with connection details (host, port, catalog, schema) in SaaS/Software.
- Authentication credentials: username and password/API key.
- For watsonx.data instances, SSL verification is required for secure connections. If the instance host uses HTTPS, there is no need to specify the SSL certificate parameter. However, if the instance host uses an unsecured HTTP connection, ensure you provide the path to the SSL certificate file.
+Refer to the Configuring dbt-watsonx-presto section for guidance on obtaining and organizing these details. + + + + +import SetUpPages from '/snippets/_setup-pages-intro.md'; + + + + +## Connecting to IBM watsonx.data Presto + +To connect dbt with watsonx.data Presto(java), you need to configure a profile in your `profiles.yml` file located in the `.dbt/` directory of your home folder. The following is an example configuration for connecting to IBM watsonx.data SaaS and Software instances: + + + +```yaml +my_project: + outputs: + software: + type: presto + method: BasicAuth + user: [user] + password: [password] + host: [hostname] + database: [database name] + schema: [your dbt schema] + port: [port number] + threads: [1 or more] + ssl_verify: path/to/certificate + + saas: + type: presto + method: BasicAuth + user: [user] + password: [api_key] + host: [hostname] + database: [database name] + schema: [your dbt schema] + port: [port number] + threads: [1 or more] + + target: software + +``` + + + +## Host parameters + +The following profile fields are required for configuring watsonx.data Presto(java) connections. Currently, it supports only the `BasicAuth` authentication method. For IBM watsonx.data SaaS or Software instances, You can get the hostname and port details by clicking View connect details inside the Presto(java) engine details page. + +| Option | Required/Optional | Description | Example | +| --------- | ------- | ------- | ----------- | +| `method` | Required (default value is none) | Authentication method for Presto | `None` or `BasicAuth` | +| `user` | Required | Username or email for authentication. | `user` | +| `password`| Required (if `method` is `BasicAuth`) | Password or API key for authentication | `password` | +| `host` | Required | Hostname for connecting to Presto. | `127.0.0.1` | +| `database`| Required | The catalog name in your presto instance. | `Analytics` | +| `schema` | Required | The schema name within your presto instance catalog. | `my_schema` | +| `port` | Required | Port for connecting to Presto. | `443` | +| ssl_verify | Optional (default: **true**) | Specifies the path to the SSL certificate or a boolean value. The SSL certificate path is required if the watsonx.data instance is not secure (HTTP).| `path/to/certificate` or `true` | + + +### Schemas and databases +When selecting the catalog and the schema, make sure the user has read and write access to both. This selection does not limit your ability to query the catalog. Instead, they serve as the default location for where tables and views are materialized. In addition, the Presto connector used in the catalog must support creating tables. This default can be changed later from within your dbt project. + +### SSL Verification +- If the Presto instance uses an unsecured HTTP connection, you must set `ssl_verify` to the path of the SSL certificate file. +- If the instance uses `HTTPS`, this parameter is not required and can be omitted. + +## Additional parameters + +The following profile fields are optional to set up. They let you configure your instance session and dbt for your connection. 
+ + +| Profile field | Description | Example | +| ----------------------------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------ | +| `threads` | How many threads dbt should use (default is `1`) | `8` | +| `http_headers` | HTTP headers to send alongside requests to Presto, specified as a yaml dictionary of (header, value) pairs. | `X-Presto-Routing-Group: my-instance` | +| `http_scheme` | The HTTP scheme to use for requests to (default: `http`, or `https` if `BasicAuth`) | `https` or `http` | From cbc72cbd89ef36bc8595c834b24c3aa26f2525b3 Mon Sep 17 00:00:00 2001 From: KNagaVivek <79193329+KNagaVivek@users.noreply.github.com> Date: Thu, 9 Jan 2025 09:41:50 +0530 Subject: [PATCH 06/53] Update sidebars --- website/sidebars.js | 2 ++ 1 file changed, 2 insertions(+) diff --git a/website/sidebars.js b/website/sidebars.js index 3a8f560c297..00850689b31 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -252,6 +252,7 @@ const sidebarSettings = { "docs/core/connect-data-platform/tidb-setup", "docs/core/connect-data-platform/upsolver-setup", "docs/core/connect-data-platform/vertica-setup", + "docs/core/connect-data-platform/watsonx-presto-setup", "docs/core/connect-data-platform/yellowbrick-setup", ], }, @@ -897,6 +898,7 @@ const sidebarSettings = { "reference/resource-configs/teradata-configs", "reference/resource-configs/upsolver-configs", "reference/resource-configs/vertica-configs", + "reference/resource-configs/watsonx-presto-config", "reference/resource-configs/yellowbrick-configs", ], }, From e50f831c69c22f8dbb772f5908b3ae76d8d0c6a5 Mon Sep 17 00:00:00 2001 From: KNagaVivek <79193329+KNagaVivek@users.noreply.github.com> Date: Sat, 11 Jan 2025 09:52:04 +0530 Subject: [PATCH 07/53] Update watsonx-presto-setup --- .../core/connect-data-platform/watsonx-presto-setup.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 6bf4ca61a2b..957a22509ad 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -72,11 +72,15 @@ my_project: ## Host parameters -The following profile fields are required for configuring watsonx.data Presto(java) connections. Currently, it supports only the `BasicAuth` authentication method. For IBM watsonx.data SaaS or Software instances, You can get the hostname and port details by clicking View connect details inside the Presto(java) engine details page. +The following profile fields are required to configure watsonx.data Presto(java) connections. The `method` field determines the authentication type used for the connection: +1. **`none`** : If not specified, the `method` field defaults to `none`, which is used for unauthenticated connections (e.g., for local development of OSS Presto instances). +2. **`BasicAuth`** : For secure connections (e.g., to IBM watsonx.data SaaS or Software instances), you must explicitly set `method: BasicAuth` and provide the corresponding `user` and `password` fields. + +For IBM watsonx.data SaaS or Software instances, you can get the `hostname` and `port` details by clicking **View connect details** on the Presto(java) engine details page. 
| Option | Required/Optional | Description | Example | | --------- | ------- | ------- | ----------- | -| `method` | Required (default value is none) | Authentication method for Presto | `None` or `BasicAuth` | +| `method` | Required (default value is none) | Specifies the authentication method for Presto. Use `none` for unauthenticated connections or `BasicAuth` for secure connections. | `None` or `BasicAuth` | | `user` | Required | Username or email for authentication. | `user` | | `password`| Required (if `method` is `BasicAuth`) | Password or API key for authentication | `password` | | `host` | Required | Hostname for connecting to Presto. | `127.0.0.1` | From f4ad42fbbc64df9aef4562c8c1eb88b0b21efed7 Mon Sep 17 00:00:00 2001 From: KNagaVivek Date: Wed, 15 Jan 2025 09:25:32 +0530 Subject: [PATCH 08/53] Update watsonx-presto-config --- .../resource-configs/watsonx-presto-config.md | 71 ++++++++++++------- 1 file changed, 44 insertions(+), 27 deletions(-) diff --git a/website/docs/reference/resource-configs/watsonx-presto-config.md b/website/docs/reference/resource-configs/watsonx-presto-config.md index 42111a2fb47..6415a2b6058 100644 --- a/website/docs/reference/resource-configs/watsonx-presto-config.md +++ b/website/docs/reference/resource-configs/watsonx-presto-config.md @@ -5,7 +5,13 @@ id: "watsonx-presto-configs" ## Instance requirements -To use IBM watsonx.data Presto(java) with dbt, ensure the instance has an attached catalog that allows creating, renaming, altering, and dropping objects such as tables and views. The user connecting to the instance with dbt must have equivalent permissions for the target catalog. +To use IBM watsonx.data Presto(java) with `dbt-watsonx-presto` adapter, ensure the instance has an attached catalog that supports creating, renaming, altering, and dropping objects such as tables and views. The user connecting to the instance via the `dbt-watsonx-presto` adapter must have the necessary permissions for the target catalog. + +For detailed setup instructions, including setting up watsonx.data, adding the Presto(Java) engine, configuring storages, registering data sources, and managing permissions, refer to the official IBM documentation: +- watsonx.data Software Documentation: [IBM watsonx.data Software Guide](https://www.ibm.com/docs/en/watsonx/watsonxdata/2.1.x) +- watsonx.data SaaS Documentation: [IBM watsonx.data SaaS Guide](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-getting-started) + + ## Session properties @@ -45,40 +51,59 @@ hive.allow-rename-table=true ## File format configuration -For file-based connectors, such as Hive, you can customize table materialization and data formats. For example, to create a partitioned [Parquet](https://spark.apache.org/docs/latest/sql-data-sources-parquet.html) table: +File-based connectors, such as Hive and Iceberg, allow customization of table materialization, data formats, and partitioning strategies in dbt models. The following examples demonstrate how to configure these options for each connector. + + +### Hive Configuration + +Hive supports specifying file formats and partitioning strategies using the properties parameter in dbt models. 
The example below demonstrates how to create a partitioned table stored in Parquet format; `format` specifies the file format, and `partitioned_by` defines the partitioning column(s):

```sql
{{
    config(
      materialized='table',
      properties={
        "format": "'PARQUET'",
        "partitioned_by": "ARRAY['id']",
      }
    )
}}
```

For more details about Hive table creation and supported properties, refer to the [Hive connector documentation](https://prestodb.io/docs/current/connector/hive.html#create-a-managed-table).

### Iceberg Configuration

Iceberg supports defining file formats and advanced partitioning strategies to optimize query performance. The following example demonstrates how to create an ORC table partitioned using a bucketing strategy; `format` specifies the file format, and `partitioning` defines the partitioning strategy:

```sql
{{
    config(
      materialized='table',
      properties={
        "format": "'ORC'",
        "partitioning": "ARRAY['bucket(id, 2)']",
      }
    )
}}
```

For more information about Iceberg table creation and supported configurations, refer to the [Iceberg connector documentation](https://prestodb.io/docs/current/connector/iceberg.html#create-table).


## Seeds and prepared statements
The `dbt-watsonx-presto` adapter offers comprehensive support for all [Presto datatypes](https://prestodb.io/docs/current/language/types.html) and [watsonx.data Presto datatypes](https://www.ibm.com/support/pages/node/7157339) in seed files. To leverage this functionality, you must explicitly define the data types for each column.

You can configure column data types either in the `dbt_project.yml` file or in property files, as supported by dbt. For more details on seed configuration and best practices, refer to the [dbt seed configuration documentation](https://docs.getdbt.com/reference/seed-configs).

## Materializations
The `dbt-watsonx-presto` adapter supports both table and view materializations, allowing you to manage how your data is stored and queried in watsonx.data Presto(java).

For further information on configuring materializations, refer to the [dbt materializations documentation](https://docs.getdbt.com/reference/resource-configs/materialized).

### Table

The `dbt-watsonx-presto` adapter enables you to create and update tables through table materialization, making it easier to work with data in watsonx.data Presto.

#### Recommendations
- **Check Permissions:** Ensure that the necessary permissions for table creation are enabled in the catalog or schema.
@@ -93,20 +118,12 @@ PrestoUserError(type=USER_ERROR, name=NOT_SUPPORTED, message="This connector doe ### View -The `dbt-watsonx-presto` adapter supports creating views using the `materialized='view'` configuration in your dbt model. By default, when you set the materialization to view, it creates a view in watsonx.data Presto. - -```sql -{{ - config( - materialized='view', - ) -}} -``` +The `dbt-watsonx-presto` adapter automatically creates views by default, as views are the standard materialization in dbt. If no materialization is explicitly specified, dbt will create a view in watsonx.data Presto. -For more details, refer to the watsonx.data [sql statement support](https://www.ibm.com/support/pages/node/7157339) or Presto [connector documentation](https://prestodb.io/docs/current/connector.html) to verify whether your connector supports view creation. +To confirm whether your connector supports view creation, refer to the watsonx.data [sql statement support](https://www.ibm.com/support/pages/node/7157339) or Presto [connector documentation](https://prestodb.io/docs/current/connector.html). -### Unsupported Features +## Unsupported Features The following features are not supported by the `dbt-watsonx-presto` adapter - Incremental Materialization - Materialized Views From eb0385f557f1d020b0f74ae28d92f8c4a671366c Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Sun, 19 Jan 2025 07:04:16 +0530 Subject: [PATCH 09/53] Update website/docs/docs/core/connect-data-platform/teradata-setup.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/docs/core/connect-data-platform/teradata-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md index 774f5b5e070..dbca24c6213 100644 --- a/website/docs/docs/core/connect-data-platform/teradata-setup.md +++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md @@ -210,7 +210,7 @@ For using cross-DB macros, teradata-utils as a macro namespace will not be used, ##### hash `Hash` macro needs an `md5` function implementation. Teradata doesn't support `md5` natively. You need to install a User Defined Function (UDF) and optionally specify `md5_udf` [variable](/docs/build/project-variables).
- If not specified the code defaults to using `GLOBAL_FUNCTIONS.hash_md5`. See below instructions on how to install the custom UDF: + If not specified the code defaults to using `GLOBAL_FUNCTIONS.hash_md5`. See the following instructions on how to install the custom UDF: 1. Download the md5 UDF implementation from Teradata (registration required): https://downloads.teradata.com/download/extensibility/md5-message-digest-udf. 1. Unzip the package and go to `src` directory. 1. Start up `bteq` and connect to your database. From b4a158114ee8c7a244666f769a0fb2e8997a23f5 Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Sun, 19 Jan 2025 07:04:33 +0530 Subject: [PATCH 10/53] Update website/docs/reference/resource-configs/teradata-configs.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/reference/resource-configs/teradata-configs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index 8debd1c79ae..5df679fe069 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -350,7 +350,7 @@ If a user sets some key-value pair with value as `'{model}'`, internally this `' ## Unit testing * Unit testing is supported in dbt-teradata, allowing users to write and execute unit tests using the dbt test command. - * For detailed guidance, refer to the dbt documentation. + * For detailed guidance, refer to the [dbt unit tests documentation](/docs/build/documentation). * QVCI must be enabled in the database to run unit tests for views. * Additional details on enabling QVCI can be found in the General section. From 1b6364d4922096d99f49b4c45f403d2dedaae99e Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Sun, 19 Jan 2025 07:04:57 +0530 Subject: [PATCH 11/53] Update website/docs/reference/resource-configs/teradata-configs.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/reference/resource-configs/teradata-configs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index 5df679fe069..650743ee3f3 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -382,7 +382,7 @@ In temporal databases, valid time is crucial for applications like historical re The `valid_history` incremental strategy requires the following parameters: * `unique_key`: The primary key of the model (excluding the valid time components), specified as a column name or list of column names. * `valid_period`: Name of the model column indicating the period for which the record is considered to be valid. The datatype must be `PERIOD(DATE)` or `PERIOD(TIMESTAMP)`. -* `use_valid_to_time`: Wether the end bound value of the valid period in the input is considered by the strategy when building the valid timeline. Use 'no' if you consider your record to be valid until changed (and supply any value greater to the begin bound for the end bound of the period - a typical convention is `9999-12-31` of ``9999-12-31 23:59:59.999999`). Use 'yes' if you know until when the record is valid (typically this is a correction in the history timeline). 
+* `use_valid_to_time`: Whether the end bound value of the valid period in the input is considered by the strategy when building the valid timeline. Use `no` if you consider your record to be valid until changed (and supply any value greater to the begin bound for the end bound of the period. A typical convention is `9999-12-31` of ``9999-12-31 23:59:59.999999`). Use `yes` if you know until when the record is valid (typically this is a correction in the history timeline). The valid_history strategy in dbt-teradata involves several critical steps to ensure the integrity and accuracy of historical data management: * Remove duplicates and conflicting values from the source data: From 94eb01c8b60353d6d856cc5f3c7269222a1bba64 Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Sun, 19 Jan 2025 07:05:11 +0530 Subject: [PATCH 12/53] Update website/docs/reference/resource-configs/teradata-configs.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/reference/resource-configs/teradata-configs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index 650743ee3f3..9d0214b2dd1 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -390,7 +390,7 @@ The valid_history strategy in dbt-teradata involves several critical steps to en * The process of removing primary key duplicates (ie. two or more records with the same value for the `unique_key` and BEGIN() bond of the `valid_period` fields) in the dataset produced by the model. If such duplicates exist, the row with the lowest value is retained for all non-primary-key fields (in the order specified in the model) is retained. Full-row duplicates are always de-duplicated. * Identify and adjust overlapping time slices (if use_valid_to_time='yes): * Overlapping time periods in the data are corrected to maintain a consistent and non-overlapping timeline. To do so, the valid period end bound of a record is adjusted to meet the begin bound of the next record with the same `unique_key` value and overlapping `valid_period` value if any. -* Manage records needing to be adjusted, deleted or split based on the source and target data: +* Manage records needing to be adjusted, deleted, or split based on the source and target data: * This involves handling scenarios where records in the source data overlap with or need to replace records in the target data, ensuring that the historical timeline remains accurate. * Compact history: * Normalize and compact the history by merging records of adjacent time periods withe same value, optimizing database storage and performance. We use the function TD_NORMALIZE_MEET for this purpose. 
From f9a4b28b06ff59e32a95d5e94ecbbf9906cbfe0a Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Sun, 19 Jan 2025 07:05:33 +0530 Subject: [PATCH 13/53] Update website/docs/reference/resource-configs/teradata-configs.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/reference/resource-configs/teradata-configs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index 9d0214b2dd1..a2e2a11faec 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -393,7 +393,7 @@ The valid_history strategy in dbt-teradata involves several critical steps to en * Manage records needing to be adjusted, deleted, or split based on the source and target data: * This involves handling scenarios where records in the source data overlap with or need to replace records in the target data, ensuring that the historical timeline remains accurate. * Compact history: - * Normalize and compact the history by merging records of adjacent time periods withe same value, optimizing database storage and performance. We use the function TD_NORMALIZE_MEET for this purpose. + * Normalize and compact the history by merging records of adjacent time periods with the same value, optimizing database storage and performance. We use the function TD_NORMALIZE_MEET for this purpose. * Delete existing overlapping records from the target table: * Before inserting new or updated records, any existing records in the target table that overlap with the new data are removed to prevent conflicts. * Insert the processed data into the target table: From accf0c9f8faf9fbdf555d767b8f624655d61f9f1 Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Sun, 19 Jan 2025 07:05:55 +0530 Subject: [PATCH 14/53] Update website/docs/reference/resource-configs/teradata-configs.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/reference/resource-configs/teradata-configs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index a2e2a11faec..bb0e16de802 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -387,7 +387,7 @@ The `valid_history` incremental strategy requires the following parameters: The valid_history strategy in dbt-teradata involves several critical steps to ensure the integrity and accuracy of historical data management: * Remove duplicates and conflicting values from the source data: * This step ensures that the data is clean and ready for further processing by eliminating any redundant or conflicting records. - * The process of removing primary key duplicates (ie. two or more records with the same value for the `unique_key` and BEGIN() bond of the `valid_period` fields) in the dataset produced by the model. If such duplicates exist, the row with the lowest value is retained for all non-primary-key fields (in the order specified in the model) is retained. Full-row duplicates are always de-duplicated. + * The process of removing primary key duplicates (two or more records with the same value for the `unique_key` and BEGIN() bond of the `valid_period` fields) in the dataset produced by the model. 
If such duplicates exist, the row with the lowest value is retained for all non-primary-key fields (in the order specified in the model). Full-row duplicates are always de-duplicated. * Identify and adjust overlapping time slices (if use_valid_to_time='yes): * Overlapping time periods in the data are corrected to maintain a consistent and non-overlapping timeline. To do so, the valid period end bound of a record is adjusted to meet the begin bound of the next record with the same `unique_key` value and overlapping `valid_period` value if any. * Manage records needing to be adjusted, deleted, or split based on the source and target data: From 0c29ad3b71e731eb7865d146fa404de030a642e6 Mon Sep 17 00:00:00 2001 From: Mohan Talla Date: Sun, 19 Jan 2025 07:06:22 +0530 Subject: [PATCH 15/53] Update website/docs/docs/core/connect-data-platform/teradata-setup.md Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> --- website/docs/docs/core/connect-data-platform/teradata-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md index dbca24c6213..72716b5471a 100644 --- a/website/docs/docs/core/connect-data-platform/teradata-setup.md +++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md @@ -247,7 +247,7 @@ dbt-teradata 1.8.0 and later versions support unit tests, enabling you to valida ## Limitations -### Browser Authentication +### Browser authentication When running a dbt job with logmech set to "browser", the initial authentication opens a browser window where you must enter your username and password.
After authentication, this window remains open, requiring you to manually switch back to the dbt console.
For every subsequent connection, a new browser tab briefly opens, displaying the message "TERADATA BROWSER AUTHENTICATION COMPLETED," and silently reuses the existing session.
From 9e43058f48273860dea187903fe5d21709ec0e5b Mon Sep 17 00:00:00 2001 From: "Talla, Mohan" Date: Sun, 19 Jan 2025 07:31:30 +0530 Subject: [PATCH 16/53] Added bullet points --- .../core/connect-data-platform/teradata-setup.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md index 72716b5471a..338fea33590 100644 --- a/website/docs/docs/core/connect-data-platform/teradata-setup.md +++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md @@ -248,12 +248,12 @@ dbt-teradata 1.8.0 and later versions support unit tests, enabling you to valida ## Limitations ### Browser authentication -When running a dbt job with logmech set to "browser", the initial authentication opens a browser window where you must enter your username and password.
-After authentication, this window remains open, requiring you to manually switch back to the dbt console.
-For every subsequent connection, a new browser tab briefly opens, displaying the message "TERADATA BROWSER AUTHENTICATION COMPLETED," and silently reuses the existing session.
-However, the focus stays on the browser window, so you’ll need to manually switch back to the dbt console each time.
-This behavior is the default functionality of the teradatasql driver and cannot be avoided at this time.
-To prevent session expiration and the need to re-enter credentials, ensure the authentication browser window stays open until the job is complete. +* When running a dbt job with logmech set to "browser", the initial authentication opens a browser window where you must enter your username and password.
+* After authentication, this window remains open, requiring you to manually switch back to the dbt console.
* For every subsequent connection, a new browser tab briefly opens and displays the message "TERADATA BROWSER AUTHENTICATION COMPLETED," while the driver silently reuses the existing session.
+* However, the focus stays on the browser window, so you’ll need to manually switch back to the dbt console each time.
+* This behavior is the default functionality of the teradatasql driver and cannot be avoided at this time.
+* To prevent session expiration and the need to re-enter credentials, ensure the authentication browser window stays open until the job is complete. ### Transaction mode Both ANSI and TERA modes are now supported in dbt-teradata. TERA mode's support is introduced with dbt-teradata 1.7.1, it is an initial implementation. From 13e36abe7fb6078042ca9e11f719bccebda54308 Mon Sep 17 00:00:00 2001 From: "Talla, Mohan" Date: Sun, 19 Jan 2025 07:45:43 +0530 Subject: [PATCH 17/53] Updated for dbt-teradata 1.9.0 --- .../connect-data-platform/teradata-setup.md | 18 ++++++----- .../resource-configs/teradata-configs.md | 32 ++----------------- 2 files changed, 13 insertions(+), 37 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/teradata-setup.md b/website/docs/docs/core/connect-data-platform/teradata-setup.md index 338fea33590..dc33c7a299a 100644 --- a/website/docs/docs/core/connect-data-platform/teradata-setup.md +++ b/website/docs/docs/core/connect-data-platform/teradata-setup.md @@ -38,17 +38,19 @@ import SetUpPages from '/snippets/_setup-pages-intro.md'; | 1.6.x | ✅ | ✅ | ✅ | ❌ | | 1.7.x | ✅ | ✅ | ✅ | ❌ | | 1.8.x | ✅ | ✅ | ✅ | ✅ | +| 1.9.x | ✅ | ✅ | ✅ | ✅ | ## dbt dependent packages version compatibility -| dbt-teradata | dbt-core | dbt-teradata-util | dbt-util | -|--------------|------------|-------------------|----------------| -| 1.2.x | 1.2.x | 0.1.0 | 0.9.x or below | -| 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 | -| 1.7.x | 1.7.x | 1.1.1 | 1.1.1 | -| 1.8.x | 1.8.x | 1.1.1 | 1.1.1 | -| 1.8.x | 1.8.x | 1.2.0 | 1.2.0 | -| 1.8.x | 1.8.x | 1.3.0 | 1.3.0 | +| dbt-teradata | dbt-core | dbt-teradata-util | dbt-util | +|--------------|----------|-------------------|----------------| +| 1.2.x | 1.2.x | 0.1.0 | 0.9.x or below | +| 1.6.7 | 1.6.7 | 1.1.1 | 1.1.1 | +| 1.7.x | 1.7.x | 1.1.1 | 1.1.1 | +| 1.8.x | 1.8.x | 1.1.1 | 1.1.1 | +| 1.8.x | 1.8.x | 1.2.0 | 1.2.0 | +| 1.8.x | 1.8.x | 1.3.0 | 1.3.0 | +| 1.9.x | 1.9.x | 1.3.0 | 1.3.0 | ### Connecting to Teradata diff --git a/website/docs/reference/resource-configs/teradata-configs.md b/website/docs/reference/resource-configs/teradata-configs.md index bb0e16de802..682d73cdbf3 100644 --- a/website/docs/reference/resource-configs/teradata-configs.md +++ b/website/docs/reference/resource-configs/teradata-configs.md @@ -12,25 +12,6 @@ id: "teradata-configs" +quote_columns: false #or `true` if you have csv column headers with spaces ``` -* *Enable view column types in docs* - Teradata Vantage has a dbscontrol configuration flag called `DisableQVCI`. This flag instructs the database to create `DBC.ColumnsJQV` with view column type definitions. To enable this functionality you need to: - 1. Enable QVCI mode in Vantage. Use `dbscontrol` utility and then restart Teradata. Run these commands as a privileged user on a Teradata node: - ```bash - # option 551 is DisableQVCI. Setting it to false enables QVCI. - dbscontrol << EOF - M internal 551=false - W - EOF - - # restart Teradata - tpareset -y Enable QVCI - ``` - 2. Instruct `dbt` to use `QVCI` mode. Include the following variable in your `dbt_project.yml`: - ```yaml - vars: - use_qvci: true - ``` - For example configuration, see [dbt_project.yml](https://github.com/Teradata/dbt-teradata/blob/main/test/catalog/with_qvci/dbt_project.yml) in `dbt-teradata` QVCI tests. 
- ## Models ### @@ -351,14 +332,7 @@ If a user sets some key-value pair with value as `'{model}'`, internally this `' ## Unit testing * Unit testing is supported in dbt-teradata, allowing users to write and execute unit tests using the dbt test command. * For detailed guidance, refer to the [dbt unit tests documentation](/docs/build/documentation). - -* QVCI must be enabled in the database to run unit tests for views. - * Additional details on enabling QVCI can be found in the General section. - * Without QVCI enabled, unit test support for views will be limited. - * Users might encounter the following database error when testing views without QVCI enabled: - ``` - * [Teradata Database] [Error 3706] Syntax error: Data Type "N" does not match a Defined Type name. - ``` +> In Teradata, reusing the same alias across multiple common table expressions (CTEs) or subqueries within a single model is not permitted, as it results in parsing errors; therefore, it is essential to assign unique aliases to each CTE or subquery to ensure proper query execution. ## valid_history incremental materialization strategy _This is available in early access_ @@ -388,8 +362,8 @@ The valid_history strategy in dbt-teradata involves several critical steps to en * Remove duplicates and conflicting values from the source data: * This step ensures that the data is clean and ready for further processing by eliminating any redundant or conflicting records. * The process of removing primary key duplicates (two or more records with the same value for the `unique_key` and BEGIN() bond of the `valid_period` fields) in the dataset produced by the model. If such duplicates exist, the row with the lowest value is retained for all non-primary-key fields (in the order specified in the model). Full-row duplicates are always de-duplicated. -* Identify and adjust overlapping time slices (if use_valid_to_time='yes): - * Overlapping time periods in the data are corrected to maintain a consistent and non-overlapping timeline. To do so, the valid period end bound of a record is adjusted to meet the begin bound of the next record with the same `unique_key` value and overlapping `valid_period` value if any. +* Identify and adjust overlapping time slices: + * Overlapping or adjacent time periods in the data are corrected to maintain a consistent and non-overlapping timeline. To achieve this, the macro adjusts the valid period end bound of a record to align with the begin bound of the next record (if they overlap or are adjacent) within the same `unique_key` group. If `use_valid_to_time = 'yes'`, the valid period end bound provided in the source data is used. Otherwise, a default end date is applied for missing bounds, and adjustments are made accordingly. * Manage records needing to be adjusted, deleted, or split based on the source and target data: * This involves handling scenarios where records in the source data overlap with or need to replace records in the target data, ensuring that the historical timeline remains accurate. 
* Compact history: From 206ced66162688cb82d6586254e2b93ff3a968b8 Mon Sep 17 00:00:00 2001 From: KNagaVivek Date: Tue, 21 Jan 2025 20:35:07 +0530 Subject: [PATCH 18/53] Remove OSS Presto content --- .../watsonx-presto-setup.md | 20 +++++-------- .../resource-configs/watsonx-presto-config.md | 30 +++++-------------- 2 files changed, 16 insertions(+), 34 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 957a22509ad..be41909a58b 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -16,7 +16,7 @@ meta: config_page: /reference/resource-configs/watsonx-presto-config --- -The dbt-watsonx-presto adapter allows you to use dbt to transform and manage data on IBM watsonx.data Presto(Java), leveraging its distributed SQL query engine capabilities. The configuration and connection setup described here are also applicable to open-source Presto. Before proceeding, ensure you have the following: +The dbt-watsonx-presto adapter allows you to use dbt to transform and manage data on IBM watsonx.data Presto(Java), leveraging its distributed SQL query engine capabilities. Before proceeding, ensure you have the following: -Refer to the Configuring dbt-watsonx-presto section for guidance on obtaining and organizing these details. +Refer to [Configuring dbt-watsonx-presto](https://www.ibm.com/docs/en/watsonx/watsonxdata/2.1.x?topic=presto-configuration-setting-up-your-profile) for guidance on obtaining and organizing these details. From a3eeec2c4aaeb1a378226503ea718292b7874371 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Thu, 23 Jan 2025 08:46:59 -0500 Subject: [PATCH 42/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md Co-authored-by: nataliefiann <120089939+nataliefiann@users.noreply.github.com> --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index cd81cfb63cf..83e4cee2323 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -34,7 +34,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md'; ## Connecting to IBM watsonx.data presto -To connect dbt with watsonx.data Presto(java), you need to configure a profile in your `profiles.yml` file located in the `.dbt/` directory of your home folder. The following is an example configuration for connecting to IBM watsonx.data SaaS and Software instances: +To connect dbt with watsonx.data Presto(java), you need to configure a profile in your `profiles.yml` file located in the `.dbt/` directory of your home folder. 
The following is an example configuration for connecting to IBM watsonx.data SaaS and software instances: From 00ef1b06c29ee04d4760e0f9b83246a1f495e5fd Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Thu, 23 Jan 2025 08:47:06 -0500 Subject: [PATCH 43/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md Co-authored-by: nataliefiann <120089939+nataliefiann@users.noreply.github.com> --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 83e4cee2323..2c30707fa3b 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -72,7 +72,7 @@ my_project: ## Host parameters -The following profile fields are required to configure watsonx.data Presto(java) connections. For IBM watsonx.data SaaS or Software instances, you can get the `hostname` and `port` details by clicking **View connect details** on the Presto(java) engine details page. +The following profile fields are required to configure watsonx.data Presto(java) connections. For IBM watsonx.data SaaS or software instances, you can get the `hostname` and `port` details by clicking **View connect details** on the Presto(java) engine details page. | Option | Required/Optional | Description | Example | | --------- | ------- | ------- | ----------- | From acfd48ea081315c00872e4468b6cd915e77273c1 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Thu, 23 Jan 2025 08:47:13 -0500 Subject: [PATCH 44/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md Co-authored-by: nataliefiann <120089939+nataliefiann@users.noreply.github.com> --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 2c30707fa3b..09abb051835 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -80,7 +80,7 @@ The following profile fields are required to configure watsonx.data Presto(java) | `user` | Required | Username or email address for authentication. | `user` | | `password`| Required | Password or API key for authentication | `password` | | `host` | Required | Hostname for connecting to Presto. | `127.0.0.1` | -| `database`| Required | The catalog name in your presto instance. | `Analytics` | +| `database`| Required | The catalog name in your Presto instance. | `Analytics` | | `schema` | Required | The schema name within your presto instance catalog. | `my_schema` | | `port` | Required | The port for connecting to Presto. | `443` | | `ssl_verify` | Optional (default: **true**) | Specifies the path to the SSL certificate or a boolean value. 
The SSL certificate path is required if the watsonx.data instance is not secure (HTTP).| `path/to/certificate` or `true` | From 7b74d0ea39181fc5c1fc1d511397d55a112060d8 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Thu, 23 Jan 2025 08:47:20 -0500 Subject: [PATCH 45/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md Co-authored-by: nataliefiann <120089939+nataliefiann@users.noreply.github.com> --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 09abb051835..d5a60498a6c 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -81,7 +81,7 @@ The following profile fields are required to configure watsonx.data Presto(java) | `password`| Required | Password or API key for authentication | `password` | | `host` | Required | Hostname for connecting to Presto. | `127.0.0.1` | | `database`| Required | The catalog name in your Presto instance. | `Analytics` | -| `schema` | Required | The schema name within your presto instance catalog. | `my_schema` | +| `schema` | Required | The schema name within your Presto instance catalog. | `my_schema` | | `port` | Required | The port for connecting to Presto. | `443` | | `ssl_verify` | Optional (default: **true**) | Specifies the path to the SSL certificate or a boolean value. The SSL certificate path is required if the watsonx.data instance is not secure (HTTP).| `path/to/certificate` or `true` | From f0c17e922f496f83f629173668e9b967a610a4a5 Mon Sep 17 00:00:00 2001 From: Amy Chen <46451573+amychen1776@users.noreply.github.com> Date: Thu, 23 Jan 2025 08:47:30 -0500 Subject: [PATCH 46/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md Co-authored-by: nataliefiann <120089939+nataliefiann@users.noreply.github.com> --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index d5a60498a6c..8eb479b039e 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -76,7 +76,7 @@ The following profile fields are required to configure watsonx.data Presto(java) | Option | Required/Optional | Description | Example | | --------- | ------- | ------- | ----------- | -| `method` | Required | Specifies the authentication method for secure connections. Use `BasicAuth` when connecting to IBM watsonx.data SaaS or Software instances. | `BasicAuth` | +| `method` | Required | Specifies the authentication method for secure connections. Use `BasicAuth` when connecting to IBM watsonx.data SaaS or software instances. | `BasicAuth` | | `user` | Required | Username or email address for authentication. | `user` | | `password`| Required | Password or API key for authentication | `password` | | `host` | Required | Hostname for connecting to Presto. 
| `127.0.0.1` | From ddad7be54c30d2d1110b5ea92b94a73faee31a5e Mon Sep 17 00:00:00 2001 From: KNagaVivek <79193329+KNagaVivek@users.noreply.github.com> Date: Thu, 23 Jan 2025 19:22:56 +0530 Subject: [PATCH 47/53] Update watsonx-presto-setup.md --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 8eb479b039e..7b1f7aa88de 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -25,8 +25,6 @@ The dbt-watsonx-presto adapter allows you to use dbt to transform and manage dat Refer to [Configuring dbt-watsonx-presto](https://www.ibm.com/docs/en/watsonx/watsonxdata/2.1.x?topic=presto-configuration-setting-up-your-profile) for guidance on obtaining and organizing these details. - - import SetUpPages from '/snippets/_setup-pages-intro.md'; From 0ed1a5e5bdd75dad7545a6f16065328d006c3772 Mon Sep 17 00:00:00 2001 From: nataliefiann <120089939+nataliefiann@users.noreply.github.com> Date: Thu, 23 Jan 2025 14:17:51 +0000 Subject: [PATCH 48/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 7b1f7aa88de..cf2899f6b8e 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -32,7 +32,7 @@ import SetUpPages from '/snippets/_setup-pages-intro.md'; ## Connecting to IBM watsonx.data presto -To connect dbt with watsonx.data Presto(java), you need to configure a profile in your `profiles.yml` file located in the `.dbt/` directory of your home folder. The following is an example configuration for connecting to IBM watsonx.data SaaS and software instances: +To connect dbt with watsonx.data Presto(java), you need to configure a profile in your `profiles.yml` file located in the `.dbt/` directory of your home folder. The following is an example configuration for connecting to IBM watsonx.data SaaS and Software instances: From 271b05f64a4892f8f28657ecfef2b539319ea56f Mon Sep 17 00:00:00 2001 From: nataliefiann <120089939+nataliefiann@users.noreply.github.com> Date: Thu, 23 Jan 2025 14:18:38 +0000 Subject: [PATCH 49/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index cf2899f6b8e..9da288aa338 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -74,7 +74,7 @@ The following profile fields are required to configure watsonx.data Presto(java) | Option | Required/Optional | Description | Example | | --------- | ------- | ------- | ----------- | -| `method` | Required | Specifies the authentication method for secure connections. Use `BasicAuth` when connecting to IBM watsonx.data SaaS or software instances. 
| `BasicAuth` | +| `method` | Required | Specifies the authentication method for secure connections. Use `BasicAuth` when connecting to IBM watsonx.data SaaS or Software instances. | `BasicAuth` | | `user` | Required | Username or email address for authentication. | `user` | | `password`| Required | Password or API key for authentication | `password` | | `host` | Required | Hostname for connecting to Presto. | `127.0.0.1` | From 9ca4aa7ff7f85eca349f9c1735587d7e0c503061 Mon Sep 17 00:00:00 2001 From: nataliefiann <120089939+nataliefiann@users.noreply.github.com> Date: Thu, 23 Jan 2025 14:19:29 +0000 Subject: [PATCH 50/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 9da288aa338..3db54d82bc9 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -18,7 +18,7 @@ meta: The dbt-watsonx-presto adapter allows you to use dbt to transform and manage data on IBM watsonx.data Presto(Java), leveraging its distributed SQL query engine capabilities. Before proceeding, ensure you have the following:
    -
-  • An active IBM watsonx.data Presto(Java) engine with connection details (host, port, catalog, schema) in SaaS/software.
+  • An active IBM watsonx.data Presto(Java) engine with connection details (host, port, catalog, schema) in SaaS/Software.
  • Authentication credentials: Username and password/apikey.
  • For watsonx.data instances, SSL verification is required for secure connections. If the instance host uses HTTPS, there is no need to specify the SSL certificate parameter. However, if the instance host uses an unsecured HTTP connection, ensure you provide the path to the SSL certificate file.
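Taken together, these prerequisites map onto the profile fields listed in the host parameters table. A minimal `profiles.yml` sketch follows; the `type: presto` value, the environment-variable names, and the target name are assumptions for illustration, not values taken from this patch:

```yaml
# Minimal dbt-watsonx-presto profile sketch using BasicAuth.
my_project:
  target: dev
  outputs:
    dev:
      type: presto                                   # assumed adapter type
      method: BasicAuth
      user: "{{ env_var('WATSONX_USER') }}"          # assumed env var name
      password: "{{ env_var('WATSONX_PASSWORD') }}"  # assumed env var name
      host: 127.0.0.1       # from View connect details
      database: Analytics   # catalog name
      schema: my_schema
      port: 443
      ssl_verify: true      # or a path such as path/to/certificate for HTTP hosts
```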
From 219720afe81a9510fdb1f4fedee8bca1a393b14c Mon Sep 17 00:00:00 2001 From: nataliefiann <120089939+nataliefiann@users.noreply.github.com> Date: Thu, 23 Jan 2025 14:19:55 +0000 Subject: [PATCH 51/53] Update website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md --- .../docs/core/connect-data-platform/watsonx-presto-setup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md index 3db54d82bc9..880328929f6 100644 --- a/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md +++ b/website/docs/docs/core/connect-data-platform/watsonx-presto-setup.md @@ -70,7 +70,7 @@ my_project: ## Host parameters -The following profile fields are required to configure watsonx.data Presto(java) connections. For IBM watsonx.data SaaS or software instances, you can get the `hostname` and `port` details by clicking **View connect details** on the Presto(java) engine details page. +The following profile fields are required to configure watsonx.data Presto(java) connections. For IBM watsonx.data SaaS or Software instances, you can get the `hostname` and `port` details by clicking **View connect details** on the Presto(java) engine details page. | Option | Required/Optional | Description | Example | | --------- | ------- | ------- | ----------- | From bf41f7a28e7410389ae14fb06432951dddcea401 Mon Sep 17 00:00:00 2001 From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 23 Jan 2025 16:17:07 +0000 Subject: [PATCH 52/53] Update metricflow-commands.md updating this as --csv flag isn't supported in dbt cloud --- website/docs/docs/build/metricflow-commands.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/website/docs/docs/build/metricflow-commands.md b/website/docs/docs/build/metricflow-commands.md index 9a35939a8b4..60a310aace2 100644 --- a/website/docs/docs/build/metricflow-commands.md +++ b/website/docs/docs/build/metricflow-commands.md @@ -536,14 +536,15 @@ limit 10 - -Add the `--csv file_name.csv` flag to export the results of your query to a csv. + +Add the `--csv file_name.csv` flag to export the results of your query to a csv. The `--csv` flag is available in dbt Core only and not supported in dbt Cloud. **Query** ```bash -# In dbt Cloud + # In dbt Core mf query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27' --csv query_example.csv From 6f6b242e809941ec2cde1e4f14784a14c733ef76 Mon Sep 17 00:00:00 2001 From: Mirna Wong <89008547+mirnawong1@users.noreply.github.com> Date: Thu, 23 Jan 2025 16:26:33 +0000 Subject: [PATCH 53/53] Update metricflow-commands.md remove commented out bits bc its appearing in live site incorrectly: --- website/docs/docs/build/metricflow-commands.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/website/docs/docs/build/metricflow-commands.md b/website/docs/docs/build/metricflow-commands.md index 60a310aace2..f8921c77a24 100644 --- a/website/docs/docs/build/metricflow-commands.md +++ b/website/docs/docs/build/metricflow-commands.md @@ -542,9 +542,6 @@ Add the `--csv file_name.csv` flag to export the results of your query to a csv. 
**Query** ```bash - # In dbt Core mf query --metrics order_total --group-by metric_time,is_food_order --limit 10 --order-by -metric_time --where "is_food_order = True" --start-time '2017-08-22' --end-time '2017-08-27' --csv query_example.csv
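For readers trying the export locally, a quick way to confirm the output is to run the example above from a dbt Core project and inspect the resulting file. This is a sketch; `query_example.csv` is simply the file name used in the example:

```bash
# Export query results to CSV (dbt Core only, per the note above),
# then verify the file exists and preview its contents.
mf query --metrics order_total --group-by metric_time --limit 10 --csv query_example.csv
head -n 5 query_example.csv   # header row plus the first few result rows
wc -l query_example.csv       # total line count, including the header
```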