
Commit fe0406b

Move SQL reference to new Data Cloud SQL reference
1 parent 1b3e5d7 commit fe0406b


62 files changed (+80, -7927 lines)

website/docs/guides/hyper_file/create_update.md

+3-3
@@ -42,8 +42,8 @@ The script consist of 3 high-level steps:

 1. Start a Hyper process. The [`HyperProcess`](../../hyper-api/hyper_process)
 2. Create a connection to the `.hyper` file. Since we create the [`Connection`](../../hyper-api/connection) class with the `CreateMode.CREATE_AND_REPLACE`, the `.hyper` file will be automatically created if it does not exist yet, and will be overwritten if a file with that name already exists.
-3. Defining the table. In this case, we are using the Python utilities `TableDefinition` and `catalog.create_table`. We could have also used a [CREATE TABLE](../../sql/command/create_table) SQL command.
-4. Insert the data. In the example, we use the `Inserter` utility to provide the data from Python. You can also use [INSERT](../../sql/command/insert) or [COPY](../../sql/command/copy_from) statements or any other means to load data into the table. E.g., you can thereby directly load your table from a CSV file.
+3. Defining the table. In this case, we are using the Python utilities `TableDefinition` and `catalog.create_table`. We could have also used a [CREATE TABLE](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/create-table.html) SQL command.
+4. Insert the data. In the example, we use the `Inserter` utility to provide the data from Python. You can also use [INSERT](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/insert.html) or [COPY](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/copy-from.html) statements or any other means to load data into the table. E.g., you can thereby directly load your table from a CSV file.

 #### File Format Versions

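The four numbered steps above translate to the Python Hyper API roughly as follows. This is a minimal sketch; the file name, table layout, and sample rows are invented for illustration.

```python
from tableauhyperapi import (Connection, CreateMode, HyperProcess, Inserter,
                             SqlType, TableDefinition, TableName, Telemetry)

# Hypothetical table layout.
animals_table = TableDefinition(
    table_name=TableName("animals"),
    columns=[
        TableDefinition.Column("name", SqlType.text()),
        TableDefinition.Column("value", SqlType.big_int()),
    ],
)

# 1. Start a Hyper process.
with HyperProcess(telemetry=Telemetry.SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    # 2. Connect; CREATE_AND_REPLACE creates or overwrites the .hyper file.
    with Connection(endpoint=hyper.endpoint, database="animals.hyper",
                    create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
        # 3. Define the table.
        connection.catalog.create_table(animals_table)
        # 4. Insert the data.
        with Inserter(connection, animals_table) as inserter:
            inserter.add_rows([["cat", 10], ["dog", 20]])
            inserter.execute()
```
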
@@ -64,7 +64,7 @@ The main difference when connecting is that you use `CreateMode.NONE` instead of
 By using `CreateMode.NONE`, Hyper will connect to a pre-existing file instead of recreating a new, empty file.
 Since the default for `CreateMode` is `NONE`, you can also just leave this parameter out completely.

-You can then use SQL commands ([INSERT](../../sql/command/insert), [UPDATE](../../sql/command/update), [DELETE](../../sql/command/delete), [COPY](../../sql/command/copy_from), ...) or the `Inserter` utility class to change the data in the table.
+You can then use SQL commands ([INSERT](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/insert.html), [UPDATE](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/update.html), [DELETE](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/delete.html), [COPY](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/copy-from.html), ...) or the `Inserter` utility class to change the data in the table.
 You could also create new tables or drop existing tables.

 The following example removes rows with a `value < 50` and appends two new rows to an existing table within an extract file:

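A minimal sketch of such an update, assuming an existing `animals.hyper` file with an `animals` table that has a `value` column (names invented for illustration):

```python
from tableauhyperapi import Connection, HyperProcess, Telemetry

with HyperProcess(telemetry=Telemetry.SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    # The default create_mode is CreateMode.NONE, so this connects to the existing file.
    with Connection(endpoint=hyper.endpoint, database="animals.hyper") as connection:
        removed = connection.execute_command('DELETE FROM "animals" WHERE "value" < 50')
        connection.execute_command(
            """INSERT INTO "animals" VALUES ('fox', 70), ('owl', 90)""")
        print(f"Removed {removed} rows and appended two new ones")
```
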
website/docs/guides/hyper_file/geodata.md

+1-1
@@ -62,7 +62,7 @@ with Inserter(connection, geo_table, column_mappings, inserter_definition = inse
 inserter.execute()
 ```

-Note if you have WKT data in a comma-separated value (CSV) file, you can use the [COPY](/docs/sql/command/copy_from) command to insert the data from a CSV file. The command automatically converts the WKT strings to the `tableau.tabgeography` data type. For more information, see the [Example code using copy from CSV](#example-code-using-copy-from-csv) and the Help topic [Insert Data Directly from CSV Files](./insert_csv) and the CSV sample on GitHub, [hyper-api-samples](https://github.com/tableau/hyper-api-samples).
+Note if you have WKT data in a comma-separated value (CSV) file, you can use the [COPY](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/copy-from.html) command to insert the data from a CSV file. The command automatically converts the WKT strings to the `tableau.tabgeography` data type. For more information, see the [Example code using copy from CSV](#example-code-using-copy-from-csv) and the Help topic [Insert Data Directly from CSV Files](./insert_csv) and the CSV sample on GitHub, [hyper-api-samples](https://github.com/tableau/hyper-api-samples).

 ## Example code using the Inserter

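A minimal sketch of the COPY-based route described in the note above, assuming a CSV file `locations.csv` with a header row and a WKT column; the table, column, and file names are invented for illustration, and the geography column is declared with the `tableau.tabgeography` type name mentioned in the note:

```python
from tableauhyperapi import (Connection, CreateMode, HyperProcess, Telemetry,
                             escape_string_literal)

with HyperProcess(telemetry=Telemetry.SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint, database="geo.hyper",
                    create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
        # Geography column declared via SQL, using the type name from the note.
        connection.execute_command(
            'CREATE TABLE "sites" ("name" text, "location" tableau.tabgeography)')
        # COPY converts the WKT strings from the CSV into geography values.
        count = connection.execute_command(
            f'COPY "sites" FROM {escape_string_literal("locations.csv")} '
            "WITH (FORMAT CSV, HEADER)")
        print(f"Copied {count} rows")
```
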
website/docs/guides/hyper_file/insert_csv.md

+3-3
@@ -1,6 +1,6 @@
 # Insert Data Directly from CSV Files

-Comma-separated values (CSV) are a popular file format to import and export tabular data from programs. Hyper is able to directly load data into a Hyper table. Using the PostgreSQL-like [COPY FROM](/docs/sql/command/copy_from) command, you can copy the data much faster than you could by iteratively adding the data one row at a time.
+Comma-separated values (CSV) are a popular file format to import and export tabular data from programs. Hyper is able to directly load data into a Hyper table. Using the [COPY FROM](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/copy-from.html) command, you can copy the data much faster than you could by iteratively adding the data one row at a time.

 ```python
 from pathlib import Path
@@ -55,8 +55,8 @@ with HyperProcess(telemetry=Telemetry.SEND_USAGE_DATA_TO_TABLEAU) as hyper:

 3. Issue the `COPY FROM` command.

-The [COPY FROM](../../sql/command/copy_from) command instructs Hyper to
-directly insert data from an external file into a table.
+The [COPY FROM](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/copy-from.html)
+command instructs Hyper to directly insert data from an external file into a table.
 The `COPY` command's `WITH` clause specifies additional details about the file format: In this case, the CSV file uses a comma as the delimiter and has a header row.

 To construct the SQL command and correctly escape the file path, we use `escape_string_literal`.

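A minimal sketch of step 3, assuming an open `connection` and an existing table `animals` whose columns match the CSV layout; the table and file names are invented for illustration:

```python
from tableauhyperapi import escape_string_literal

path_to_csv = "animals.csv"  # hypothetical file
# escape_string_literal safely embeds the file path into the SQL string.
count = connection.execute_command(
    f'COPY "animals" FROM {escape_string_literal(path_to_csv)} '
    "WITH (FORMAT CSV, DELIMITER ',', HEADER)")
print(f"Copied {count} rows from {path_to_csv}")
```
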
website/docs/guides/hyper_file/read.md

+2-2
@@ -1,7 +1,7 @@
 # Read Data from Hyper Files

 To read data from a `.hyper` file, open a `Connection` to the file and
-then use the [SELECT](../../sql/command/select.md) command to retrieve data from the file.
+then use the [SELECT](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/select.html) command to retrieve data from the file.

 ```python
 from tableauhyperapi import HyperProcess, Connection, Telemetry, CreateMode, Inserter
@@ -26,5 +26,5 @@ with HyperProcess(Telemetry.SEND_USAGE_DATA_TO_TABLEAU) as hyper:
 ```

 In general, you can send arbitrarily complex queries against the data in a Hyper file.
-More information on the available SQL commands can be found in the [SQL reference](../../sql/).
+More information on the available SQL commands can be found in the [SQL reference](../../sql).
 For more information on how to issue SQL queries from Python (or your preferred language) and how to programmatically craft SQL queries, see the [Executing SQL Commands](../sql_commands) guide.

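A minimal sketch of such a read, assuming an `animals.hyper` file that contains an `animals` table with `name` and `value` columns (names invented for illustration):

```python
from tableauhyperapi import Connection, HyperProcess, Telemetry

with HyperProcess(Telemetry.SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "animals.hyper") as connection:
        # execute_list_query returns all result rows as a list.
        rows = connection.execute_list_query('SELECT "name", "value" FROM "animals"')
        for name, value in rows:
            print(name, value)
```
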
website/docs/guides/pandas_integration.md

+1-1
@@ -42,7 +42,7 @@ pantab.frame_to_hyper(animals_df, "animals.hyper", table="animals")
 To kickstart your creativity on potential use cases, let's use Hyper to run an analytical query on a Parquet file - and read the result back to a pandas frame using `pantab`.

 You can send SQL queries to Hyper and get the result as a pandas dataframe using `pantab.frame_from_hyper_query`.
-Combined with Hyper's capabilities to [query external file formats](../sql/external/), you can use this, e.g., to directly run queries on your Parquet files or Iceberg tables.
+Combined with Hyper's capabilities to [query external file formats](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/external-files.html), you can use this, e.g., to directly run queries on your Parquet files or Iceberg tables.
 The following example demonstrates this on the Parquet file `orders_10rows.parquet` which you can [download here](https://github.com/tableau/hyper-api-samples/raw/main/Community-Supported/parquet-to-hyper/orders_10rows.parquet).

 ```python

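A minimal sketch of that combination, assuming `pantab.frame_from_hyper_query` takes a path to an existing `.hyper` file (here the `animals.hyper` written above) plus a SQL string, and that the query can read the Parquet file through Hyper's `external()` function; `COUNT(*)` is used so that no column names of the Parquet file need to be assumed:

```python
import pantab

# The query reads the Parquet file directly via external() instead of a Hyper table.
query = "SELECT COUNT(*) AS num_orders FROM external('orders_10rows.parquet')"
df = pantab.frame_from_hyper_query("animals.hyper", query)
print(df)
```
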
website/docs/guides/sql_commands.md

+7-5
@@ -47,9 +47,11 @@ There is one method for SQL commands, and three methods for queries which differ
 `execute_scalar_query` | The value from one row, one column. |

 `execute_command` is meant to be used for SQL commands like
-[CREATE TABLE](../sql/command/create_table), [COPY FROM](../sql/command/copy_from),
-[INSERT](../sql/command/insert), [UPDATE](../sql/command/update),
-[DELETE](../sql/command/delete) etc., all of which don't produce any
+[CREATE TABLE](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/create-table.html),
+[COPY FROM](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/copy-from.html),
+[INSERT](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/insert.html),
+[UPDATE](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/update.html),
+[DELETE](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/delete.html) etc., all of which don't produce any
 result tuples but instead are executed because we are interested in their
 side effects. `execute_command` returns the number of rows affected by the
 command.
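
A minimal sketch of `execute_command` used purely for its side effect, assuming an open `connection`; the table and column names are invented for illustration:

```python
from tableauhyperapi import escape_string_literal

# execute_command returns the number of affected rows.
affected = connection.execute_command(
    f'DELETE FROM "orders" WHERE "status" = {escape_string_literal("cancelled")}')
print(f"{affected} rows deleted")
```
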
@@ -76,7 +78,7 @@ that single result value.

 Using Hyper SQL you can, e.g., insert, update, and delete data from tables, import data from
 Parquet files or pose arbitrarily complex analytical queries.
-For a reference documentation of the supported commands, see [Hyper SQL commands](/docs/sql/command/).
+For a reference documentation of the supported commands, see our [SQL reference](/docs/sql).

 Because the SQL statements are passed to the Hyper API as strings, you need to ensure that
 identifiers and string values are properly encoded.
@@ -123,7 +125,7 @@ INSERT INTO "Guest Names" VALUES('Francisco Eduardo')

 The table name must be in double quotes and the string constant in single quotes.

-Escaping for identifiers and strings is documented in [General Syntax](../sql/syntax.md).
+Escaping for identifiers and strings is documented in [General Syntax](https://developer.salesforce.com/docs/data/data-cloud-query-guide/references/dc-sql-reference/syntax.html).
 Instead of reimplementing those escaping rules by yourself, you can use the `escape_name`
 and `escape_string_literal` functions to correctly format identifiers and strings in
 your SQL statements.

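A minimal sketch that builds the `INSERT INTO "Guest Names"` statement from the hunk above with `escape_name` and `escape_string_literal`, assuming an open `connection`:

```python
from tableauhyperapi import escape_name, escape_string_literal

table = escape_name("Guest Names")                  # double-quoted identifier
guest = escape_string_literal("Francisco Eduardo")  # single-quoted string literal
sql = f"INSERT INTO {table} VALUES({guest})"
# sql is now: INSERT INTO "Guest Names" VALUES('Francisco Eduardo')
connection.execute_command(sql)
```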