docs/en/sql-reference/10-sql-commands/00-ddl/04-task/01-ddl-create_task.md
| WHEN boolean_expr | A condition that must be true for the task to run. |
| [ERROR_INTEGRATION](../16-notification/index.md) | Optional. The name of the notification integration to use for task error notifications, with the specific [task error payload](./10-task-error-integration-payload.md) applied. |
| COMMENT | Optional. A string literal that serves as a comment or description for the task. |
| session_parameter | Optional. Specifies session parameters to use during the task run. Session parameters must be placed after all other task parameters in the CREATE TASK statement. |
| sql | The SQL statement that the task will execute; it can be a single statement or a script. This is a mandatory field. |
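Putting the parameters above together, a minimal hedged sketch might look like the following. The task name, warehouse, and table here are placeholders, not from this page:

```sql
CREATE TASK IF NOT EXISTS my_example_task   -- placeholder name
  WAREHOUSE = 'mywh'                        -- warehouse that runs the task
  SCHEDULE = 1 MINUTE                       -- interval-based schedule
  COMMENT = 'illustrative task'             -- optional description
  enable_query_result_cache = 1             -- session parameters come last
AS
  INSERT INTO mytable VALUES (1);           -- the sql to execute
```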
### Usage Notes
- Multiple tasks that consume change data from a single table stream retrieve different deltas. When a task consumes the change data in a stream using a DML statement, the stream advances the offset. The change data is no longer available for the next task to consume. Currently, we recommend that only a single task consumes the change data from a stream. Multiple streams can be created for the same table and consumed by different tasks.
- Tasks do not retry a failed execution; executions are serial. Each SQL statement in a script is executed one by one, with no parallel execution. This preserves the sequence and dependencies of task execution.
- Interval-based tasks align to fixed interval ticks. If the current execution takes longer than the interval, the next execution starts immediately; otherwise, it waits until the next interval tick. For example, with a 1-second interval, if one execution takes 1.5 seconds, the next one starts immediately; if it takes 0.5 seconds, the next one waits for the next 1-second tick.
- While session parameters can be specified during task creation, you can also modify them later using the ALTER TASK statement. For example:
  ```sql
  ALTER TASK simple_task SET
    enable_query_result_cache = 1,
    query_result_cache_min_execute_secs = 5;
  ```
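The one-stream-per-task recommendation above can be sketched as follows; all table, stream, and task names here are hypothetical:

```sql
-- Two streams on the same table, each consumed by exactly one task,
-- so each task retrieves its own delta. All names are placeholders.
CREATE STREAM stream_for_task1 ON TABLE mytable;
CREATE STREAM stream_for_task2 ON TABLE mytable;

CREATE TASK IF NOT EXISTS consume_task1
  WAREHOUSE = 'mywh'
  SCHEDULE = 1 MINUTE
  WHEN STREAM_STATUS('stream_for_task1') = TRUE
AS
  INSERT INTO target1 SELECT * FROM stream_for_task1;
```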
### Important Notes on Cron Expressions
## Usage Examples
### CRON Schedule
```sql
CREATE TASK my_daily_task
 WAREHOUSE = 'compute_wh'
 SCHEDULE = USING CRON '0 0 9 * * *' 'America/Los_Angeles'
 AS
 INSERT INTO summary_table SELECT * FROM source_table;
```
In this example, a task named `my_daily_task` is created. It uses the **compute_wh** warehouse to run a SQL statement that inserts data into summary_table from source_table. The task is scheduled to run using a **CRON expression** that executes **daily at 9 AM Pacific Time**.
### Automatic Suspension
```sql
CREATE TASK IF NOT EXISTS mytask
 WAREHOUSE = 'system'
 SCHEDULE = 2 MINUTE
 SUSPEND_TASK_AFTER_NUM_FAILURES = 3
 AS
 INSERT INTO compaction_test.test VALUES((1));
```
This example creates a task named `mytask`, if it doesn't already exist. The task is assigned to the **system** warehouse and is scheduled to run **every 2 minutes**. It will be **automatically suspended** if it **fails three times consecutively**. The task performs an INSERT operation into the compaction_test.test table.
### Second-Level Scheduling
```sql
CREATE TASK IF NOT EXISTS daily_sales_summary
 WAREHOUSE = 'analytics'
 SCHEDULE = 30 SECOND
 AS
 SELECT sales_date, SUM(amount) AS daily_total
 FROM sales_data
 GROUP BY sales_date;
```
In this example, a task named `daily_sales_summary` is created with **second-level scheduling**. It is scheduled to run **every 30 seconds**. The task uses the **analytics** warehouse and calculates the daily sales summary by aggregating data from the sales_data table.
### Task Dependencies
```sql
CREATE TASK IF NOT EXISTS process_orders
 WAREHOUSE = 'etl'
 AFTER task1, task2
 AS
 INSERT INTO data_warehouse.orders
 SELECT * FROM staging.orders;
```
In this example, a task named `process_orders` is created, and it is defined to run **after the successful completion** of **task1** and **task2**. This is useful for creating **dependencies** in a **Directed Acyclic Graph (DAG)** of tasks. The task uses the **etl** warehouse and transfers data from the staging area to the data warehouse.
### Conditional Execution
```sql
CREATE TASK IF NOT EXISTS hourly_data_cleanup
 WAREHOUSE = 'maintenance'
 SCHEDULE = '0 0 * * * *'
 WHEN STREAM_STATUS('db1.change_stream') = TRUE
 AS
 DELETE FROM archived_data
 WHERE archived_date < DATEADD(HOUR, -24, CURRENT_TIMESTAMP());
```
In this example, a task named `hourly_data_cleanup` is created. It uses the **maintenance** warehouse and is scheduled to run **every hour**. The task deletes data from the archived_data table that is older than 24 hours. The task only runs **if the condition is met**, using the **STREAM_STATUS** function to check whether `db1.change_stream` contains change data.
### Error Integration
```sql
CREATE TASK IF NOT EXISTS mytask
 WAREHOUSE = 'mywh'
 SCHEDULE = 30 SECOND
 ERROR_INTEGRATION = 'myerror'
 AS
BEGIN
    ...
END;
```
In this example, a task named `mytask` is created. It uses the **mywh** warehouse and is scheduled to run **every 30 seconds**. The task executes a **BEGIN block** that contains an INSERT statement and a DELETE statement. The task commits the transaction after both statements are executed. When the task fails, it will trigger the **error integration** named **myerror**.
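A fuller sketch of such a script task might look like the following; the table `mytable` and the specific INSERT and DELETE statements are assumptions for illustration, not from this page:

```sql
CREATE TASK IF NOT EXISTS mytask_sketch   -- hypothetical variant of the task above
 WAREHOUSE = 'mywh'
 SCHEDULE = 30 SECOND
 ERROR_INTEGRATION = 'myerror'
 AS
BEGIN
    BEGIN;                                         -- start the transaction
    INSERT INTO mytable VALUES (1);                -- assumed INSERT statement
    DELETE FROM mytable WHERE c1 < 0;              -- assumed DELETE statement
    COMMIT;                                        -- commit after both statements
END;
```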
### Session Parameters
```sql
CREATE TASK IF NOT EXISTS cache_enabled_task
 WAREHOUSE = 'analytics'
 SCHEDULE = 5 MINUTE
 COMMENT = 'Task with query result cache enabled'
 enable_query_result_cache = 1,
 query_result_cache_min_execute_secs = 5
 AS
 SELECT SUM(amount) AS total_sales
 FROM sales_data
 WHERE transaction_date >= DATEADD(DAY, -7, CURRENT_DATE())
 GROUP BY product_category;
```
In this example, a task named `cache_enabled_task` is created with **session parameters** that enable query result caching. The task is scheduled to run **every 5 minutes** and uses the **analytics** warehouse. The session parameters **`enable_query_result_cache = 1`** and **`query_result_cache_min_execute_secs = 5`** are specified **after all other task parameters**, enabling the query result cache for queries that take at least 5 seconds to execute. This can **improve performance** for subsequent executions of the same task if the underlying data hasn't changed.