Kazuaki Ishizaki 6f62e9d9b9 [SPARK-19372][SQL] Fix throwing a Java exception at df.filter() due to 64KB bytecode size limit
## What changes were proposed in this pull request?

When the expression for `df.filter()` has many nodes (e.g. 400), the generated Java code exceeds the JVM's 64KB bytecode size limit for a single method. Compilation then throws a Java exception, and the execution fails.
If such an exception is caught, this PR disables code generation and continues execution by calling `Expression.eval()` instead.
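The fallback pattern described above can be sketched roughly as follows. This is an illustrative Java sketch, not Spark's actual internals: the class and method names (`compile`, `interpret`, `predicateFor`) and the node-count threshold are hypothetical stand-ins for the real codegen path, the JVM's "grows beyond 64 KB" error, and interpreted `Expression.eval()`.

```java
import java.util.function.IntPredicate;

public class CodegenFallback {
    // Hypothetical "code generation" step that fails for very large
    // expression trees, standing in for the JVM's 64KB method-size limit.
    static IntPredicate compile(int numNodes) {
        if (numNodes > 400) {
            throw new IllegalStateException("generated bytecode exceeds 64KB");
        }
        return x -> x > 0; // compiled predicate
    }

    // Interpreted path: always works, analogous to Expression.eval().
    static IntPredicate interpret() {
        return x -> x > 0;
    }

    static IntPredicate predicateFor(int numNodes) {
        try {
            return compile(numNodes);
        } catch (Exception e) {
            // Same idea as the PR: catch the failure and keep executing
            // with the interpreted evaluator instead of failing the query.
            return interpret();
        }
    }
}
```

With this structure, a query over a 400+ node filter expression degrades to slower interpreted evaluation rather than aborting with an exception.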

## How was this patch tested?

Added a test case to `DataFrameSuite`.

Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>

Closes #17087 from kiszk/SPARK-19372.
2017-05-16 14:47:21 -07:00