Apache Spark - A unified analytics engine for large-scale data processing
Latest commit 44a71741d5 by Marco Gaido: [SPARK-25379][SQL] Improve AttributeSet and ColumnPruning performance
## What changes were proposed in this pull request?

This PR contains three optimizations:
1) It significantly improves the `--` operation on `AttributeSet`. As a benchmark for `--`, the following code was run:
```
test("AttributeSet -- benchmark") {
  val attrSetA = AttributeSet((1 to 100).map { i => AttributeReference(s"c$i", IntegerType)() })
  val attrSetB = AttributeSet(attrSetA.take(80).toSeq)
  val attrSetC = AttributeSet((1 to 100).map { i => AttributeReference(s"c2_$i", IntegerType)() })
  val attrSetD = AttributeSet((attrSetA.take(50) ++ attrSetC.take(50)).toSeq)
  val attrSetE = AttributeSet((attrSetC.take(50) ++ attrSetA.take(50)).toSeq)
  val nIter = 1000000
  val t0 = System.nanoTime()
  (1 to nIter).foreach { _ =>
    val r1 = attrSetA -- attrSetB
    val r2 = attrSetA -- attrSetC
    val r3 = attrSetA -- attrSetD
    val r4 = attrSetA -- attrSetE
  }
  val t1 = System.nanoTime()
  val totalTime = t1 - t0
  // System.nanoTime() returns nanoseconds, so the per-iteration average is in ns.
  println(s"Average time: ${totalTime / nIter} ns")
}
```
The results are:
```
Before PR - Average time: 67674 ns (100  %)
After PR  - Average time: 28827 ns (42.6 %)
```
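
A plausible source of the speedup, as a minimal sketch: assuming `AttributeSet` stores its elements in an internal `baseSet: Set[AttributeEquals]` (the actual code in the PR may differ in detail), `--` can be specialized for the common case where the right-hand side is itself an `AttributeSet`, reusing its existing wrapper set instead of re-wrapping every element of a generic collection:

```
// Sketch only: specialize `--` when the argument is already an AttributeSet.
def --(other: Traversable[NamedExpression]): AttributeSet = other match {
  // Fast path: reuse the other set's AttributeEquals wrappers as-is.
  case otherSet: AttributeSet =>
    new AttributeSet(baseSet -- otherSet.baseSet)
  // Generic path: wrap each element before subtracting.
  case _ =>
    new AttributeSet(baseSet -- other.map(a => new AttributeEquals(a.toAttribute)))
}
```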
2) In `ColumnPruning`, it replaces occurrences of `(attributeSet1 -- attributeSet2).nonEmpty` with the logically equivalent `!attributeSet1.subsetOf(attributeSet2)`, which is orders of magnitude more efficient (especially when many attributes are involved). Running the previous benchmark with `--` replaced by `subsetOf` returns:
```
Average time: 67 ns (0.1 %)
```
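
The equivalence behind the rewrite: `(a -- b).isEmpty` holds exactly when `a.subsetOf(b)`, so the emptiness check on a materialized difference becomes a negated `subsetOf` call that can return early and allocates nothing. A hypothetical illustration (`plan` and `child` are example names, not the actual `ColumnPruning` code):

```
// Before: builds the entire difference set just to test emptiness.
val needsPruning = (plan.references -- child.outputSet).nonEmpty

// After: logically equivalent, but short-circuits on the first
// attribute of plan.references not found in child.outputSet.
val needsPruningFast = !plan.references.subsetOf(child.outputSet)
```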

3) It provides a more efficient way of building `AttributeSet`s, which can greatly improve the performance of the `references` and `outputSet` methods of `Expression` and `QueryPlan`. This mainly avoids unneeded work, e.g. the creation of many short-lived `AttributeEquals` wrapper objects. A sketch of the idea follows.
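
A minimal sketch, assuming the same internal `baseSet: Set[AttributeEquals]` as above (the helper name and signature here are illustrative, not necessarily the PR's exact API): when building one `AttributeSet` from several existing ones, the underlying wrapper sets can be unioned directly, skipping the per-attribute unwrap/re-wrap cycle.

```
import scala.collection.mutable

// Sketch only: union the base sets of existing AttributeSets directly,
// avoiding one AttributeEquals allocation per attribute.
def fromAttributeSets(sets: Iterable[AttributeSet]): AttributeSet = {
  val base = new mutable.LinkedHashSet[AttributeEquals]()
  sets.foreach(s => base ++= s.baseSet)
  new AttributeSet(base.toSet)
}
```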

The overall effect of those optimizations has been tested on `ColumnPruning` with the following benchmark:

```
test("ColumnPruning benchmark") {
  val attrSetA = (1 to 100).map { i => AttributeReference(s"c$i", IntegerType)() }
  val attrSetB = attrSetA.take(80)
  val attrSetC = attrSetA.take(20).map(a => Alias(Add(a, Literal(1)), s"${a.name}_1")())

  val input = LocalRelation(attrSetA)
  val query1 = Project(attrSetB, Project(attrSetA, input)).analyze
  val query2 = Project(attrSetC, Project(attrSetA, input)).analyze
  val query3 = Project(attrSetA, Project(attrSetA, input)).analyze
  val nIter = 100000
  val t0 = System.nanoTime()
  (1 to nIter).foreach { _ =>
    ColumnPruning(query1)
    ColumnPruning(query2)
    ColumnPruning(query3)
  }
  val t1 = System.nanoTime()
  val totalTime = t1 - t0
  // As above, the per-iteration average is in nanoseconds.
  println(s"Average time: ${totalTime / nIter} ns")
}
```

The output of the test is:

```
Before PR - Average time: 733471 ns (100  %)
After PR  - Average time: 362455 ns (49.4 %)
```

The performance improvement has also been evaluated on the `SQLQueryTestSuite` queries. The rule metrics below report effective time / total time (in ns), followed by effective runs / total runs:

```
(before) org.apache.spark.sql.catalyst.optimizer.ColumnPruning                                              518413198 / 1377707172                          2756 / 15717
(after)  org.apache.spark.sql.catalyst.optimizer.ColumnPruning                                              415432579 / 1121147950                          2756 / 15717
% Running time                                                                                                  80.1% / 81.3%
```

Other rules also benefit, especially from (3), although the impact there is lower, e.g.:
```
(before) org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences                                  307341442 / 623436806                           2154 / 16480
(after)  org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences                                  290511312 / 560962495                           2154 / 16480
% Running time                                                                                                  94.5% / 90.0%
```

The impact on the `SQLQueryTestSuite` queries is lower than in the dedicated benchmarks because these optimizations matter most when many attributes are involved, and the test queries typically reference very few attributes.

## How was this patch tested?

Ran the benchmarks above and the existing unit tests.

Closes #22364 from mgaido91/SPARK-25379.

Authored-by: Marco Gaido <marcogaido91@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2018-09-26 21:34:18 +08:00

# Apache Spark

Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

<http://spark.apache.org/>

## Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.

## Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

```
build/mvn -DskipTests clean package
```

(You do not need to do this if you downloaded a pre-built package.)

You can build Spark using more than one thread by using the `-T` option with Maven; see "Parallel builds in Maven 3". More detailed documentation is available from the project site, at "Building Spark".
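
For instance (the thread count is illustrative; `-T` is a standard Maven 3 option):

```
build/mvn -T 4 -DskipTests clean package
```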

For general development tips, including info on developing Spark using an IDE, see "Useful Developer Tools".

## Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

```
./bin/spark-shell
```

Try the following command, which should return 1000:

```
scala> sc.parallelize(1 to 1000).count()
```

## Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

```
./bin/pyspark
```

And run the following command, which should also return 1000:

```
>>> sc.parallelize(range(1000)).count()
```

## Example Programs

Spark also comes with several sample programs in the `examples` directory. To run one of them, use `./bin/run-example <class> [params]`. For example:

```
./bin/run-example SparkPi
```

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

```
MASTER=spark://host:7077 ./bin/run-example SparkPi
```

Many of the example programs print usage help if no params are given.

## Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

```
./dev/run-tests
```

Please see the guidance on how to run tests for a module, or individual tests.
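
For example, a hedged sketch of running a single test suite via sbt (the suite pattern is illustrative; see the developer-tools documentation for the exact syntax):

```
build/sbt "sql/testOnly *ColumnPruningSuite"
```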

There is also a Kubernetes integration test; see `resource-managers/kubernetes/integration-tests/README.md`.

## A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "Specifying the Hadoop Version and Enabling YARN" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions.
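
For example, a hedged sketch (the profile and version values are illustrative; use the values from the build documentation that match your cluster):

```
build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.7 -DskipTests clean package
```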

## Configuration

Please refer to the Configuration Guide in the online documentation for an overview on how to configure Spark.

## Contributing

Please review the Contribution to Spark guide for information on how to get started contributing to the project.