Directory: spark-instrumented-optimizer/sql
Latest commit dd6d55d5de by Eric Liang: [SPARK-20398][SQL] range() operator should include cancellation reason when killed
## What changes were proposed in this pull request?

https://issues.apache.org/jira/browse/SPARK-19820 adds a reason field explaining why a task was killed. However, for backwards compatibility it kept the old TaskKilledException constructor, which defaults to "unknown reason".
The range() operator should use the constructor that fills in the reason rather than dropping it when a task is killed.
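As a rough sketch of the pattern described above (not the actual patch), assuming a `TaskContext` named `ctx` is available inside the operator's loop; `throwIfKilled` is a hypothetical helper, while `TaskContext.getKillReason()` and the `TaskKilledException(reason)` constructor are the APIs added by SPARK-19820:

```scala
import org.apache.spark.{TaskContext, TaskKilledException}

// Hypothetical helper: propagate the kill reason from the TaskContext
// instead of relying on the no-arg constructor, which falls back to
// "unknown reason".
def throwIfKilled(ctx: TaskContext): Unit = {
  if (ctx.isInterrupted()) {
    throw ctx.getKillReason() match {
      case Some(reason) => new TaskKilledException(reason)
      case None         => new TaskKilledException() // "unknown reason"
    }
  }
}
```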

## How was this patch tested?

Existing tests, and I tested this manually.

Author: Eric Liang <ekl@databricks.com>

Closes #17692 from ericl/fix-kill-reason-in-range.
Committed on 2017-04-19 19:53:40 -07:00
| Path | Last commit | Date |
|------|-------------|------|
| catalyst | [MINOR][SS] Fix a missing space in UnsupportedOperationChecker error message | 2017-04-19 18:58:14 -07:00 |
| core | [SPARK-20398][SQL] range() operator should include cancellation reason when killed | 2017-04-19 19:53:40 -07:00 |
| hive | [SPARK-20349][SQL] ListFunctions returns duplicate functions after using persistent functions | 2017-04-17 09:50:20 -07:00 |
| hive-thriftserver | [SPARK-20316][SQL] Val and Var should strictly follow the Scala syntax | 2017-04-15 10:34:57 +01:00 |
| README.md | [SPARK-16557][SQL] Remove stale doc in sql/README.md | 2016-07-14 19:24:42 -07:00 |

Spark SQL

This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.
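For example, the same query can be written either way against a SparkSession (the 2.x entry point); this is a minimal sketch, and the data and names below are illustrative only:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative local session; in a real application the master is usually
// set by spark-submit rather than in code.
val spark = SparkSession.builder()
  .appName("SparkSQLReadmeSketch")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val people = Seq(("Alice", 34), ("Bob", 45)).toDF("name", "age")
people.createOrReplaceTempView("people")

// The same query expressed in SQL ...
spark.sql("SELECT name FROM people WHERE age > 40").show()

// ... and with the DataFrame API.
people.filter($"age" > 40).select("name").show()
```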

Spark SQL is broken up into four subprojects:

  • Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions (see the expression-tree sketch after this list).
  • Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a public interface, SQLContext, that allows users to execute SQL or DataFrame queries against existing RDDs and Parquet files.
  • Hive Support (sql/hive) - Includes an extension of SQLContext called HiveContext that allows users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs (see the Hive sketch after this list).
  • HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2 (for JDBC/ODBC) compatible server.
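To make the Catalyst bullet concrete, here is a minimal sketch that builds and evaluates a small expression tree. It uses Catalyst's internal API (org.apache.spark.sql.catalyst.expressions), which is not a stable public interface and may change between releases:

```scala
import org.apache.spark.sql.catalyst.expressions.{Add, Literal}

// Build the expression tree for `1 + 2` and evaluate it directly;
// no input row is needed because both children are literals.
val onePlusTwo = Add(Literal(1), Literal(2))
println(onePlusTwo.eval())  // 3
```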
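For the Hive Support bullet, a small sketch assuming a Spark 2.x build with Hive classes on the classpath; in 2.x the usual entry point is a SparkSession built with enableHiveSupport() rather than HiveContext directly, and the table name below is illustrative only:

```scala
import org.apache.spark.sql.SparkSession

// A SparkSession with Hive support resolves table metadata through the
// Hive Metastore and can use Hive SerDes.
val spark = SparkSession.builder()
  .appName("HiveSupportSketch")
  .master("local[*]")
  .enableHiveSupport()
  .getOrCreate()

// Create a Hive-managed table (illustrative name) and query it.
spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
spark.sql("SELECT COUNT(*) FROM src").show()
```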