spark-instrumented-optimizer/sql/core/src/test
Commit 3c0d2365d5 by Davies Liu: [SPARK-12796] [SQL] Whole stage codegen
This is the initial work for whole stage codegen; it supports Projection/Filter/Range, and we will continue working on it to support more physical operators.

A micro benchmark shows that a query with range, filter, and projection can be 3X faster than before.
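A minimal sketch of the kind of query measured (the row count and the use of count() as the sink are assumptions, not the PR's actual benchmark):

```
// Minimal micro benchmark sketch for a range + filter + projection query.
// The row count and the count() sink are assumptions, not the PR's benchmark.
val start = System.nanoTime()
sqlContext.range(0, 200L * 1000 * 1000)
  .filter("(id & 1) = 1")
  .selectExpr("id + 1")
  .count()
println(s"elapsed: ${(System.nanoTime() - start) / 1e6} ms")
```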

It's turned on by default. For a tree that has at least two chained plans, a WholeStageCodegen node will be inserted into it; for example, the following plan
```
Limit 10
+- Project [(id#5L + 1) AS (id + 1)#6L]
   +- Filter ((id#5L & 1) = 1)
      +- Range 0, 1, 4, 10, [id#5L]
```
will be translated into
```
Limit 10
+- WholeStageCodegen
      +- Project [(id#1L + 1) AS (id + 1)#2L]
         +- Filter ((id#1L & 1) = 1)
            +- Range 0, 1, 4, 10, [id#1L]
```
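A query of roughly this shape reproduces the plan above; the literals are read off the Range operator (start 0, step 1, 4 slices, 10 rows) and are otherwise assumptions:

```
// Reconstruction of a query yielding the plan above; the literals are
// read off the Range operator and are otherwise assumptions.
val df = sqlContext.range(0, 10, 1, 4)
  .filter("(id & 1) = 1")
  .selectExpr("id + 1")
  .limit(10)
df.explain()  // the physical plan should show the WholeStageCodegen node
```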

Here is the call graph for generating Java source for plans A and B (A supports codegen, but B does not):

```
  WholeStageCodegen       Plan A               FakeInput        Plan B
=========================================================================

-> execute()
    |
 doExecute() -------->   produce()
                            |
                         doProduce()  -------> produce()
                                                  |
                                               doProduce() ---> execute()
                                                  |
                                               consume()
                         doConsume()  ------------|
                            |
 doConsume()  <-----    consume()
```
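The net effect of this handshake is that the whole chain compiles into a single loop. As an illustration only (Spark emits Java source, not Scala), a hand-written equivalent of the fused Range/Filter/Project stage could look like:

```
// Hand-written Scala equivalent of the fused Range -> Filter -> Project stage;
// illustrative only, Spark generates Java source for this.
object FusedStageSketch {
  def main(args: Array[String]): Unit = {
    var id = 0L
    while (id < 10) {        // Range 0, 1, 4, 10 (ids 0..9, single-slice view)
      if ((id & 1) == 1) {   // Filter ((id & 1) = 1)
        val out = id + 1     // Project [(id + 1)]
        println(out)         // stand-in for handing the row to the parent Limit
      }
      id += 1
    }
  }
}
```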

The produce()/doProduce() calls walk down the tree to generate the loop that produces rows, while consume()/doConsume() walk back up to generate the code that processes each row. A SparkPlan that supports codegen needs to implement doProduce() and doConsume():

```
def doProduce(ctx: CodegenContext): (RDD[InternalRow], String)
def doConsume(ctx: CodegenContext, child: SparkPlan, input: Seq[ExprCode]): String
```
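For example, a Filter-like operator could implement doConsume() by wrapping the parent's consume() in a guard on the generated predicate. This is a simplified sketch modeled on the signatures above (condition is the bound filter predicate; the helper APIs used are assumptions), not code from the PR:

```
// Simplified sketch of a Filter-like doConsume(); `condition` and the helper
// calls (gen, currentVars, consume) are assumptions modeled on the PR's API.
override def doConsume(
    ctx: CodegenContext, child: SparkPlan, input: Seq[ExprCode]): String = {
  ctx.currentVars = input        // make the child's column variables visible
  val eval = condition.gen(ctx)  // emit code evaluating the predicate
  s"""
     |${eval.code}
     |if (!${eval.isNull} && ${eval.value}) {
     |  ${consume(ctx, input)}
     |}
   """.stripMargin
}
```

Only rows for which the generated if guard passes ever reach the parent's code, which is where the fused-loop speedup comes from.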

Author: Davies Liu <davies@databricks.com>

Closes #10735 from davies/whole2.
2016-01-16 10:29:27 -08:00
| Path | Last commit | Date |
| --- | --- | --- |
| avro | [SPARK-10136] [SQL] Fixes Parquet support for Avro array of primitive array | 2015-08-20 11:00:29 -07:00 |
| gen-java/org/apache/spark/sql/execution/datasources/parquet/test/avro | [SPARK-10136] [SQL] Fixes Parquet support for Avro array of primitive array | 2015-08-20 11:00:29 -07:00 |
| java/test/org/apache/spark/sql | [SPARK-12756][SQL] use hash expression in Exchange | 2016-01-13 22:43:28 -08:00 |
| resources | [SPARK-12833][SQL] Initial import of spark-csv | 2016-01-15 11:46:46 -08:00 |
| scala/org/apache/spark/sql | [SPARK-12796] [SQL] Whole stage codegen | 2016-01-16 10:29:27 -08:00 |
| scripts | [SPARK-9407] [SQL] Relaxes Parquet ValidTypeMap to allow ENUM predicates to be pushed down | 2015-08-12 20:01:34 +08:00 |
| thrift | [SPARK-9407] [SQL] Relaxes Parquet ValidTypeMap to allow ENUM predicates to be pushed down | 2015-08-12 20:01:34 +08:00 |
| README.md | [SPARK-9407] [SQL] Relaxes Parquet ValidTypeMap to allow ENUM predicates to be pushed down | 2015-08-12 20:01:34 +08:00 |

# Notes for Parquet compatibility tests

The following directories and files are used for Parquet compatibility tests:

```
.
├── README.md                   # This file
├── avro
│   ├── *.avdl                  # Testing Avro IDL(s)
│   └── *.avpr                  # !! NO TOUCH !! Protocol files generated from Avro IDL(s)
├── gen-java                    # !! NO TOUCH !! Generated Java code
├── scripts
│   ├── gen-avro.sh             # Script used to generate Java code for Avro
│   └── gen-thrift.sh           # Script used to generate Java code for Thrift
└── thrift
    └── *.thrift                # Testing Thrift schema(s)
```

To avoid code generation at build time, the Java code generated from the testing Thrift schema and Avro IDL is also checked in.

When updating the testing Thrift schema or Avro IDL, please run gen-thrift.sh or gen-avro.sh accordingly to update the generated Java code.

## Prerequisites

Please ensure avro-tools and thrift are installed. You may install these two on Mac OS X via:

```
$ brew install thrift avro-tools
```