[Apache ORC](https://orc.apache.org) is a columnar format with advanced features such as native zstd compression, bloom filters, and columnar encryption.
### ORC Implementation
Spark supports two ORC implementations (`native` and `hive`), which are selected via `spark.sql.orc.impl`.
The two implementations share most functionality but have different design goals.

- `native` implementation is designed to follow Spark's data source behavior like `Parquet`.
- `hive` implementation is designed to follow Hive's behavior and uses Hive SerDe.
For example, historically the `native` implementation handled `CHAR/VARCHAR` with Spark's native `String` type while the `hive` implementation handled it via Hive `CHAR/VARCHAR`, so the query results differed. Since Spark 3.1.0, [SPARK-33480](https://issues.apache.org/jira/browse/SPARK-33480) removes this difference by supporting `CHAR/VARCHAR` from the Spark side.
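As a minimal sketch, the implementation can be selected per session with a `SET` statement (`native` is the default):

<div class="codetabs">
<div data-lang="SQL" markdown="1">
{% highlight sql %}
-- Select the ORC implementation for the current session
SET spark.sql.orc.impl=native;
{% endhighlight %}
</div>
</div>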
### Vectorized Reader
The `native` implementation supports a vectorized ORC reader and has been the default ORC implementation since Spark 2.3.
The vectorized reader is used for the native ORC tables (e.g., the ones created using the clause `USING ORC`) when `spark.sql.orc.impl` is set to `native` and `spark.sql.orc.enableVectorizedReader` is set to `true`.
For the Hive ORC serde tables (e.g., the ones created using the clause `USING HIVE OPTIONS (fileFormat 'ORC')`),
the vectorized reader is used when `spark.sql.hive.convertMetastoreOrc` is also set to `true`; this configuration is turned on by default.
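As a quick illustration, both settings can be adjusted per session (a minimal sketch; both default to `true`):

<div class="codetabs">
<div data-lang="SQL" markdown="1">
{% highlight sql %}
-- Enable the vectorized reader for native ORC tables
SET spark.sql.orc.enableVectorizedReader=true;
-- Also convert Hive ORC serde tables so they can use the vectorized reader
SET spark.sql.hive.convertMetastoreOrc=true;
{% endhighlight %}
</div>
</div>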
### Schema Merging
Like Protocol Buffer, Avro, and Thrift, ORC also supports schema evolution. Users can start with
a simple schema, and gradually add more columns to the schema as needed. In this way, users may end
up with multiple ORC files with different but mutually compatible schemas. The ORC data
source is now able to automatically detect this case and merge schemas of all these files.
Since schema merging is a relatively expensive operation, and is not a necessity in most cases, it is
turned off by default. You may enable it by (see the sketch after this list)
1. setting data source option `mergeSchema` to `true` when reading ORC files, or
2. setting the global SQL option `spark.sql.orc.mergeSchema` to `true`.
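A minimal sketch of both approaches (the view name and path below are hypothetical):

<div class="codetabs">
<div data-lang="SQL" markdown="1">
{% highlight sql %}
-- (1) per read, via a data source option (the path is a placeholder)
CREATE TEMPORARY VIEW merged_orc
USING ORC
OPTIONS (path '/tmp/orc_events', mergeSchema 'true');

-- (2) globally, via the SQL option
SET spark.sql.orc.mergeSchema=true;
{% endhighlight %}
</div>
</div>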
### Zstandard
Spark supports both Hadoop 2 and 3. Since Spark 3.2, you can take advantage
of Zstandard compression in ORC files on both Hadoop versions.
Please see [Zstandard](https://facebook.github.io/zstd/) for the benefits.
<div class="codetabs">
<div data-lang="SQL" markdown="1">
{% highlight sql %}
CREATE TABLE compressed (
  key STRING,
  value STRING
)
USING ORC
OPTIONS (
  compression 'zstd'
)
{% endhighlight %}
</div>
</div>
### Bloom Filters
You can control bloom filters and dictionary encodings for ORC data sources. The following ORC example will create a bloom filter and use dictionary encoding only for `favorite_color`. To find more detailed information about the extra ORC options, visit the official Apache ORC website.
<div class="codetabs">
<div data-lang="SQL" markdown="1">
{% highlight sql %}
CREATE TABLE users_with_options (
  name STRING,
  favorite_color STRING,
  favorite_numbers array<integer>
)
USING ORC
OPTIONS (
  orc.bloom.filter.columns 'favorite_color',
  orc.dictionary.key.threshold '1.0',
  orc.column.encoding.direct 'name'
)
{% endhighlight %}
</div>
</div>
### Columnar Encryption
Since Spark 3.2, columnar encryption is supported for ORC tables with Apache ORC 1.6.
The following example uses Hadoop KMS as a key provider with the given location.
Please visit [Apache Hadoop KMS](https://hadoop.apache.org/docs/current/hadoop-kms/index.html) for details.
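A minimal sketch is shown below; the KMS URI, the key name (`pii`), and the masking policy are placeholders, and the `orc.encrypt` and `orc.mask` options assume the referenced key already exists in the KMS.

<div class="codetabs">
<div data-lang="SQL" markdown="1">
{% highlight sql %}
-- Encrypt the ssn and email columns with the key named 'pii';
-- readers without key access see nullified ssn and sha256-masked email.
CREATE TABLE encrypted (
  ssn STRING,
  email STRING,
  name STRING
)
USING ORC
OPTIONS (
  hadoop.security.key.provider.path "kms://http@localhost:9600/kms",
  orc.key.provider "hadoop",
  orc.encrypt "pii:ssn,email",
  orc.mask "nullify:ssn;sha256:email"
)
{% endhighlight %}
</div>
</div>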
### Hive metastore ORC table conversion

When reading from Hive metastore ORC tables and inserting to Hive metastore ORC tables, Spark SQL will try to use its own ORC support instead of Hive SerDe for better performance. For CTAS statements, only non-partitioned Hive metastore ORC tables are converted. This behavior is controlled by the `spark.sql.hive.convertMetastoreOrc` configuration, and is turned on by default.
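If you need the Hive SerDe path instead (for example, for behavior parity with Hive), a minimal sketch of disabling the conversion per session:

<div class="codetabs">
<div data-lang="SQL" markdown="1">
{% highlight sql %}
-- Fall back to Hive SerDe for Hive metastore ORC tables
SET spark.sql.hive.convertMetastoreOrc=false;
{% endhighlight %}
</div>
</div>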