
Commit 3a78647

Update 01-ddl-create_task.md
1 parent 1127ecd commit 3a78647

File tree: 1 file changed

docs/en/sql-reference/10-sql-commands/00-ddl/04-task/01-ddl-create_task.md

Lines changed: 46 additions & 8 deletions
@@ -38,7 +38,7 @@ AS
| WHEN boolean_expr | A condition that must be true for the task to run. |
| [ERROR_INTEGRATION](../16-notification/index.md) | Optional. The name of the notification integration to use for task error notifications, with the specific [task error payload](./10-task-error-integration-payload.md) applied. |
| COMMENT | Optional. A string literal that serves as a comment or description for the task. |
-| session_parameter | Optional. Specifies session parameters to use for the task during task run. |
+| session_parameter | Optional. Specifies session parameters to use for the task during task execution. Note that session parameters must be placed after all other task parameters in the CREATE TASK statement. |
| sql | The SQL statement that the task will execute; it can be a single statement or a script. This is a mandatory field. |

### Usage Notes
@@ -61,6 +61,12 @@ AS
- Multiple tasks that consume change data from a single table stream retrieve different deltas. When a task consumes the change data in a stream using a DML statement, the stream advances the offset, and the change data is no longer available for the next task to consume. Currently, we recommend that only a single task consume the change data from a stream. Multiple streams can be created for the same table and consumed by different tasks (see the sketch below).
- Tasks do not retry on failure; each execution is serial. The SQL statements in a script are executed one by one, with no parallel execution. This ensures that the sequence and dependencies of task execution are maintained.
- Interval-based tasks adhere strictly to fixed interval ticks. If the current task execution exceeds the interval unit, the next run starts immediately; otherwise, the next run waits until the next interval tick. For example, if a task is defined with a 1-second interval and one execution takes 1.5 seconds, the next run starts immediately; if an execution takes 0.5 seconds, the next run waits for the next 1-second tick.
+- While session parameters can be specified during task creation, you can also modify them later using the ALTER TASK statement. For example:
+  ```sql
+  ALTER TASK simple_task SET
+    enable_query_result_cache = 1,
+    query_result_cache_min_execute_secs = 5;
+  ```
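To make the last recommendation in the bullet list concrete, here is a minimal sketch (not part of this commit): two streams over the same hypothetical table `db1.orders`, each consumed by its own task, so each task retrieves its own deltas. All names and schedules here are illustrative assumptions.

```sql
-- Two independent streams on the same table: each tracks its own offset.
CREATE STREAM IF NOT EXISTS stream_a ON TABLE db1.orders;
CREATE STREAM IF NOT EXISTS stream_b ON TABLE db1.orders;

-- Each task consumes only its own stream, so neither advances the other's offset.
CREATE TASK IF NOT EXISTS consume_a
  WAREHOUSE = 'etl'
  SCHEDULE = 1 MINUTE
AS
  INSERT INTO db1.orders_audit SELECT * FROM stream_a;

CREATE TASK IF NOT EXISTS consume_b
  WAREHOUSE = 'etl'
  SCHEDULE = 1 MINUTE
AS
  INSERT INTO db1.orders_metrics SELECT * FROM stream_b;
```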

### Important Notes on Cron Expressions

@@ -100,6 +106,8 @@ AS

## Usage Examples

+### CRON Schedule
+
```sql
CREATE TASK my_daily_task
WAREHOUSE = 'compute_wh'
@@ -109,7 +117,9 @@ CREATE TASK my_daily_task
INSERT INTO summary_table SELECT * FROM source_table;
```

-In this example, a task named my_daily_task is created. It uses the compute_wh warehouse to run a SQL statement that inserts data into summary_table from source_table. The task is scheduled to run daily at 9 AM Pacific Time.
+In this example, a task named `my_daily_task` is created. It uses the **compute_wh** warehouse to run a SQL statement that inserts data into summary_table from source_table. The task is scheduled with a **CRON expression** to run **daily at 9 AM Pacific Time**.
+
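The SCHEDULE clause itself falls in the fold between the two hunks above. Based on the description (daily at 9 AM Pacific Time) and the task syntax shown elsewhere in this document, the collapsed lines plausibly read as follows; treat this as a reconstruction, not part of the commit:

```sql
CREATE TASK my_daily_task
  WAREHOUSE = 'compute_wh'
  -- Six-field cron (second minute hour day month weekday) plus a time zone.
  SCHEDULE = USING CRON '0 0 9 * * *' 'America/Los_Angeles'
AS
  INSERT INTO summary_table SELECT * FROM source_table;
```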
+### Automatic Suspension

```sql
CREATE TASK IF NOT EXISTS mytask
@@ -120,17 +130,23 @@ AS
INSERT INTO compaction_test.test VALUES((1));
```

-This example creates a task named mytask, if it doesn't already exist. The task is assigned to the system warehouse and is scheduled to run every 2 minutes. It will be suspended if it fails three times consecutively. The task performs an INSERT operation into the compaction_test.test table.
+This example creates a task named `mytask`, if it doesn't already exist. The task is assigned to the **system** warehouse and is scheduled to run **every 2 minutes**. It will be **automatically suspended** if it **fails three times consecutively**. The task performs an INSERT operation into the compaction_test.test table.
+
+### Second-Level Scheduling

```sql
CREATE TASK IF NOT EXISTS daily_sales_summary
WAREHOUSE = 'analytics'
SCHEDULE = 30 SECOND
+AS
+SELECT sales_date, SUM(amount) AS daily_total
FROM sales_data
GROUP BY sales_date;
```

-In this example, a task named daily_sales_summary is created with a second-level scheduling. It is scheduled to run every 30 SECOND. The task uses the 'analytics' warehouse and calculates the daily sales summary by aggregating data from the sales_data table.
+In this example, a task named `daily_sales_summary` is created with **second-level scheduling**. It is scheduled to run **every 30 seconds**. The task uses the **analytics** warehouse and calculates the daily sales summary by aggregating data from the sales_data table.
+
+### Task Dependencies

```sql
CREATE TASK IF NOT EXISTS process_orders
@@ -140,20 +156,24 @@ AS
INSERT INTO data_warehouse.orders
SELECT * FROM staging.orders;
```

-In this example, a task named process_orders is created, and it is defined to run after the successful completion of task1 and task2. This is useful for creating dependencies in a Directed Acyclic Graph (DAG) of tasks. The task uses the 'etl' warehouse and transfers data from the staging area to the data warehouse.
+In this example, a task named `process_orders` is created, and it is defined to run **after the successful completion** of **task1** and **task2**. This is useful for creating **dependencies** in a **Directed Acyclic Graph (DAG)** of tasks. The task uses the **etl** warehouse and transfers data from the staging area to the data warehouse.
+
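The fold above hides the middle of the process_orders definition. Judging from the paragraph, the full statement plausibly includes an AFTER clause along these lines (task1 and task2 are assumed to already exist; this reconstruction is not part of the commit):

```sql
CREATE TASK IF NOT EXISTS process_orders
  WAREHOUSE = 'etl'
  -- No SCHEDULE here: the task runs when both predecessors complete successfully.
  AFTER task1, task2
AS
INSERT INTO data_warehouse.orders
SELECT * FROM staging.orders;
```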
+### Conditional Execution

```sql
CREATE TASK IF NOT EXISTS hourly_data_cleanup
WAREHOUSE = 'maintenance'
SCHEDULE = USING CRON '0 0 * * * *'
-WHEN STREAM_STATUS('change_stream') = TRUE
+WHEN STREAM_STATUS('db1.change_stream') = TRUE
AS
DELETE FROM archived_data
WHERE archived_date < DATEADD(HOUR, -24, CURRENT_TIMESTAMP());

```

-In this example, a task named hourly_data_cleanup is created. It uses the maintenance warehouse and is scheduled to run every hour. The task deletes data from the archived_data table that is older than 24 hours. The task only runs if the change_stream stream contains change data.
+In this example, a task named `hourly_data_cleanup` is created. It uses the **maintenance** warehouse and is scheduled to run **every hour**. The task deletes data from the archived_data table that is older than 24 hours. The task only runs **if the condition is met**, using the **STREAM_STATUS** function to check whether the `db1.change_stream` stream contains change data.
+
+### Error Integration

```sql
CREATE TASK IF NOT EXISTS mytask
@@ -169,4 +189,22 @@ BEGIN
END;
```

-In this example, a task named mytask is created. It uses the mywh warehouse and is scheduled to run every 30 seconds. The task executes a BEGIN block that contains an INSERT statement and a DELETE statement. The task commits the transaction after both statements are executed. when the task fails, it will trigger the error integration named myerror.
+In this example, a task named `mytask` is created. It uses the **mywh** warehouse and is scheduled to run **every 30 seconds**. The task executes a **BEGIN block** that contains an INSERT statement and a DELETE statement. The task commits the transaction after both statements are executed. When the task fails, it will trigger the **error integration** named **myerror**.
+
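Only the END; of this task's body survives the fold above. From the paragraph, the whole statement plausibly looks like the following; the mytable schema and the myerror notification integration are assumptions not shown in this commit:

```sql
CREATE TASK IF NOT EXISTS mytask
  WAREHOUSE = 'mywh'
  SCHEDULE = 30 SECOND
  -- Failures are reported through this notification integration.
  ERROR_INTEGRATION = 'myerror'
AS
BEGIN
    BEGIN;
    INSERT INTO mytable(ts) VALUES (CURRENT_TIMESTAMP());
    DELETE FROM mytable WHERE ts < DATEADD(MINUTE, -5, CURRENT_TIMESTAMP());
    COMMIT;
END;
```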
+### Session Parameters
+
+```sql
+CREATE TASK IF NOT EXISTS cache_enabled_task
+WAREHOUSE = 'analytics'
+SCHEDULE = 5 MINUTE
+COMMENT = 'Task with query result cache enabled'
+enable_query_result_cache = 1,
+query_result_cache_min_execute_secs = 5
+AS
+SELECT SUM(amount) AS total_sales
+FROM sales_data
+WHERE transaction_date >= DATEADD(DAY, -7, CURRENT_DATE())
+GROUP BY product_category;
+```
+
+In this example, a task named `cache_enabled_task` is created with **session parameters** that enable query result caching. The task is scheduled to run **every 5 minutes** and uses the **analytics** warehouse. The session parameters **`enable_query_result_cache = 1`** and **`query_result_cache_min_execute_secs = 5`** are specified **after all other task parameters**, enabling the query result cache for queries that take at least 5 seconds to execute. This can **improve performance** for subsequent executions of the same task if the underlying data hasn't changed.
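As a final sketch (not in the commit), the created task can be exercised and tuned afterwards with the companion task DDL statements documented alongside CREATE TASK; the exact commands below are assumptions based on that folder:

```sql
-- Trigger a one-off run instead of waiting for the 5-minute schedule.
EXECUTE TASK cache_enabled_task;

-- Raise the caching threshold later without recreating the task.
ALTER TASK cache_enabled_task SET
    query_result_cache_min_execute_secs = 10;
```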
