@@ -76,9 +76,12 @@ Rows selected via an Optic query can be exported to any of the below file format
The `export-avro-files` command writes one or more Avro files to the directory specified by the `--path` option. This
command reuses Spark's support for writing Avro files. You can include any of the
- [Spark Avro options](https://spark.apache.org/docs/latest/sql-data-sources-avro.html) via the `-P` option to
+ [Spark Avro data source options](https://spark.apache.org/docs/latest/sql-data-sources-avro.html) via the `-P` option to
control how Avro content is written. These options are expressed as `-PoptionName=optionValue`.
+ For configuration options listed in the above Spark Avro guide, use the `-C` option instead. For example,
+ `-Cspark.sql.avro.compression.codec=deflate` would change the type of compression used for writing Avro files.
+
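+ As a sketch of a complete invocation (assuming the command's usual `--connection-string` and `--query` options; the values shown are placeholders for illustration):
+
+ ```
+ ./bin/flux export-avro-files \
+     --connection-string "user:password@localhost:8004" \
+     --query "op.fromView('example', 'employees')" \
+     --path destination \
+     -Cspark.sql.avro.compression.codec=deflate
+ ```
+
+ Note that per the Spark Avro guide, the `compression` data source option, when supplied via `-P`, takes precedence over this configuration setting.
+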
### Delimited text
The `export-delimited-files` command writes one or more delimited text (commonly CSV) files to the directory
@@ -125,16 +128,22 @@ By default, each file will be written using the UTF-8 encoding. You can specify
The `export-orc-files` command writes one or more ORC files to the directory specified by the `--path` option. This
command reuses Spark's support for writing ORC files. You can include any of the
- [Spark ORC options](https://spark.apache.org/docs/latest/sql-data-sources-orc.html) via the `-P` option to
+ [Spark ORC data source options](https://spark.apache.org/docs/latest/sql-data-sources-orc.html) via the `-P` option to
control how ORC content is written. These options are expressed as `-PoptionName=optionValue`.
+ For configuration options listed in the above Spark ORC guide, use the `-C` option instead. For example,
+ `-Cspark.sql.orc.impl=hive` would change the ORC implementation used by Spark.
+
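+ A minimal sketch combining both option styles (the connection string, query, and path values are placeholders, and the `compression` data source option is taken from the Spark ORC guide rather than this document):
+
+ ```
+ ./bin/flux export-orc-files \
+     --connection-string "user:password@localhost:8004" \
+     --query "op.fromView('example', 'employees')" \
+     --path destination \
+     -Pcompression=zlib \
+     -Cspark.sql.orc.impl=hive
+ ```
+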
### Parquet
The `export-parquet-files` command writes one or more Parquet files to the directory specified by the `--path` option. This
command reuses Spark's support for writing Parquet files. You can include any of the
- [Spark Parquet options](https://spark.apache.org/docs/latest/sql-data-sources-parquet.html) via the `-P` option to
+ [Spark Parquet data source options](https://spark.apache.org/docs/latest/sql-data-sources-parquet.html) via the `-P` option to
control how Parquet content is written. These options are expressed as `-PoptionName=optionValue`.
+ For configuration options listed in the above Spark Parquet guide, use the `-C` option instead. For example,
+ `-Cspark.sql.parquet.compression.codec=gzip` would change the compression used for writing Parquet files.
+
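+ As an illustrative sketch (connection string, query, and path are placeholder values):
+
+ ```
+ ./bin/flux export-parquet-files \
+     --connection-string "user:password@localhost:8004" \
+     --query "op.fromView('example', 'employees')" \
+     --path destination \
+     -Cspark.sql.parquet.compression.codec=gzip
+ ```
+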
## Controlling the save mode
Each of the commands for exporting rows to files supports a `--mode` option that controls how data is written to a