spark-instrumented-optimizer/sql
Latest commit bdfe60ac92 by Tathagata Das (2016-11-14 10:03:01 -08:00): [SPARK-18416][STRUCTURED STREAMING] Fixed temp file leak in state store
## What changes were proposed in this pull request?

StateStore.get() causes a temporary file to be created immediately, even if the store is never used to make updates for the new version. In that case store.commit() is never called, so the temp file is not closed and the output stream to it stays open indefinitely.

This PR fixes the leak by opening the temp file only when updates are actually made.
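A minimal sketch of the idea, using made-up class and method names rather than the actual HDFSBackedStateStoreProvider internals: the delta-file output stream is created lazily on the first update instead of eagerly when the store is handed out, so a store that is only read from never creates or holds open a temp file.

```scala
import java.io.{DataOutputStream, File, FileOutputStream}

// Hypothetical, simplified store illustrating the lazy-open pattern described
// above; the real provider in Spark is considerably more involved.
class SimpleStateStore(tempDeltaFile: File) {

  // Not opened in the constructor, so a read-only store never touches disk.
  private var deltaStream: DataOutputStream = null

  private def streamForUpdates(): DataOutputStream = {
    if (deltaStream == null) {
      // The temp file is created here, on the first update, not when the
      // store is first obtained.
      deltaStream = new DataOutputStream(new FileOutputStream(tempDeltaFile))
    }
    deltaStream
  }

  def put(key: Array[Byte], value: Array[Byte]): Unit = {
    val out = streamForUpdates()
    out.writeInt(key.length); out.write(key)
    out.writeInt(value.length); out.write(value)
  }

  def commit(): Unit = {
    // Nothing to flush or close if no updates were ever made.
    if (deltaStream != null) deltaStream.close()
  }

  def abort(): Unit = {
    if (deltaStream != null) {
      deltaStream.close()
      tempDeltaFile.delete()  // no stray temp file left behind
    }
  }
}
```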

## How was this patch tested?

New unit test

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #15859 from tdas/SPARK-18416.
| Path | Latest commit | Date |
|---|---|---|
| catalyst | [SPARK-18387][SQL] Add serialization to checkEvaluation. | 2016-11-11 13:52:10 -08:00 |
| core | [SPARK-18416][STRUCTURED STREAMING] Fixed temp file leak in state store | 2016-11-14 10:03:01 -08:00 |
| hive | [SPARK-17982][SQL] SQLBuilder should wrap the generated SQL with parenthesis for LIMIT | 2016-11-11 13:28:18 -08:00 |
| hive-thriftserver | [SPARK-14914][CORE] Fix Resource not closed after using, for unit tests and example | 2016-11-10 10:54:36 +00:00 |
| README.md | [SPARK-16557][SQL] Remove stale doc in sql/README.md | 2016-07-14 19:24:42 -07:00 |

# Spark SQL

This module provides support for executing relational queries expressed in either SQL or the DataFrame/Dataset API.

Spark SQL is broken up into four subprojects:

  • Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
  • Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs. This component also includes a public interface, SQLContext, that allows users to execute SQL or DataFrame/Dataset queries against existing RDDs and Parquet files (see the short example after this list).
  • Hive Support (sql/hive) - Includes an extension of SQLContext called HiveContext that allows users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes. There are also wrappers that allow users to run queries that include Hive UDFs, UDAFs, and UDTFs.
  • HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2-compatible server for JDBC/ODBC clients.
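As a quick, self-contained illustration of how these pieces fit together (the data and names below are made up for the example), the same query can be run either as SQL or through the DataFrame/Dataset API; both are planned by Catalyst and executed by sql/core:

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlExample {
  def main(args: Array[String]): Unit = {
    // SparkSession is the entry point for both SQL and the DataFrame/Dataset
    // API (it wraps the SQLContext mentioned above).
    val spark = SparkSession.builder()
      .appName("SparkSqlExample")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Build a small DataFrame from in-memory data and expose it to SQL.
    val people = Seq(("Alice", 34), ("Bob", 29)).toDF("name", "age")
    people.createOrReplaceTempView("people")

    // The same query expressed as SQL ...
    spark.sql("SELECT name FROM people WHERE age > 30").show()

    // ... and through the DataFrame API; both go through Catalyst's
    // optimizer and are executed as Spark jobs by sql/core.
    people.filter($"age" > 30).select("name").show()

    spark.stop()
  }
}
```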