Anton Okolnychyi b45ff02e77
[SPARK-26203][SQL][TEST] Benchmark performance of In and InSet expressions
## What changes were proposed in this pull request?

This PR contains benchmarks for `In` and `InSet` expressions. They cover literals of different data types and will help us to decide where to integrate the switch-based logic for bytes/shorts/ints.

As discussed in [PR-23171](https://github.com/apache/spark/pull/23171), one potential approach is to convert `In` to `InSet` whenever all elements are literals, regardless of data type and the number of elements. According to the results of this PR, we might want to keep the threshold for the number of elements: the if-else approach can be faster for some data types on a small number of elements (structs? arrays? small decimals?).
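
For context, the optimizer currently rewrites `In` to `InSet` only once the number of literals exceeds `spark.sql.optimizer.inSetConversionThreshold` (10 by default). A minimal sketch for observing both code paths from the public API (the session setup and object name are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object InVsInSetDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("in-vs-inset-demo")
      .getOrCreate()
    import spark.implicits._

    val df = spark.range(10000000).toDF("id")

    // 5 literals: below the threshold, the filter stays an `In` expression.
    df.filter($"id".isin(1L to 5L: _*)).count()

    // 250 literals: above the threshold, OptimizeIn rewrites it to `InSet`.
    df.filter($"id".isin(1L to 250L: _*)).count()
  }
}
```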

### byte / short / int / long

Unless the number of items is really big, `InSet` is slower than `In` because of autoboxing.

Interestingly, `In` scales worse on bytes/shorts than on ints/longs. For example, `InSet` starts to match the performance of `In` at around 50 bytes/shorts, while this does not happen for the same number of ints/longs. This is a bit strange, as bytes/shorts (e.g., `(byte) 1`, `(short) 2`) are represented as ints in the bytecode.
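
The boxing cost is easy to see outside of Spark, too. The following illustrative snippet (not Spark's generated code) mirrors the shape of the two strategies:

```scala
// InSet-style lookup: the backing set is typed over Any, so every probe has
// to box the primitive before it can be hashed and compared.
val inSetStyle: Set[Any] = Set(1, 2, 3)
def containsBoxed(v: Int): Boolean = inSetStyle.contains(v) // v is boxed here

// In-style lookup: a chain of primitive comparisons with no allocation.
def containsUnboxed(v: Int): Boolean = v == 1 || v == 2 || v == 3
```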

### float / double

Use cases on floats/doubles also suffer from autoboxing. Therefore, `In` outperforms `InSet` with 10 elements.

As with bytes/shorts, `In` scales worse on floats/doubles than on ints/longs because the equality condition is more complicated (e.g., `(java.lang.Float.isNaN(filter_valueArg_0) && java.lang.Float.isNaN(9.0F)) || filter_valueArg_0 == 9.0F`).
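
Spelled out as plain Scala, the per-literal check `In` has to emit looks roughly like this (a sketch matching the quoted snippet above; plain `==` would not treat two NaNs as equal):

```scala
// NaN-aware equality for a single literal; `In` emits one such check per
// element and ORs them together, which costs more than an int/long `==`.
def floatEquals(value: Float, literal: Float): Boolean =
  (java.lang.Float.isNaN(value) && java.lang.Float.isNaN(literal)) ||
    value == literal
```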

### decimal

The reason why we have separate benchmarks for small and large decimals is that Spark can represent sufficiently small decimals as primitive longs.

If this optimization applies, `equals` amounts to nothing more than comparing two longs. If it does not, Spark creates an instance of `scala.BigDecimal` and uses it for comparisons, which is considerably more expensive.

`Decimal$hashCode` will always use `scala.BigDecimal$hashCode`, even if the number is small enough to fit into a long variable. As a consequence, use cases on small decimals are faster with `In`, which uses long comparisons under the hood. Large decimal values are always faster with `InSet`.
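
A hypothetical sketch of that asymmetry (the real class is `org.apache.spark.sql.types.Decimal`; the class and field names below are made up):

```scala
import scala.math.BigDecimal

// Hypothetical stand-in for Spark's Decimal: small values live in a long,
// large ones in a BigDecimal (decimalVal is null when the long path is used).
class DualDecimal(val longVal: Long, val decimalVal: BigDecimal) {
  private def toBig: BigDecimal =
    if (decimalVal != null) decimalVal else BigDecimal(longVal)

  // `In` can hit the cheap path: two small decimals compare as bare longs.
  def sameValue(other: DualDecimal): Boolean =
    if (decimalVal == null && other.decimalVal == null) longVal == other.longVal
    else toBig == other.toBig

  // Hashing always materializes a BigDecimal, so `InSet` pays the full cost
  // even for small values.
  override def hashCode(): Int = toBig.hashCode()
}
```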

### string

`UTF8String$equals` is not cheap. Therefore, `In` does not outperform `InSet` as clearly as in the previous use cases.
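
For intuition, string equality degrades to a byte-by-byte walk in the worst case (the real implementation lives in `org.apache.spark.unsafe.types.UTF8String`); a rough sketch:

```scala
// Worst case: both arrays are scanned to the end before a verdict, so each
// comparison in an `In` chain is O(length) rather than O(1).
def bytesEqual(a: Array[Byte], b: Array[Byte]): Boolean = {
  if (a.length != b.length) return false
  var i = 0
  while (i < a.length) {
    if (a(i) != b(i)) return false
    i += 1
  }
  true
}
```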

### timestamp / date

Under the hood, timestamp/date values are represented as long/int values, so `In` allows us to avoid autoboxing.
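
A sketch of that internal encoding (the conversion helpers below are illustrative, not Spark's own):

```scala
import java.time.{Instant, LocalDate}
import java.util.concurrent.TimeUnit

// Catalyst stores timestamps as microseconds since the epoch (a Long) and
// dates as days since the epoch (an Int), so `In` compares bare primitives.
def toCatalystTimestamp(i: Instant): Long =
  TimeUnit.SECONDS.toMicros(i.getEpochSecond) +
    TimeUnit.NANOSECONDS.toMicros(i.getNano.toLong)

def toCatalystDate(d: LocalDate): Int = d.toEpochDay.toInt
```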

### array

Arrays behave as expected: `In` is faster with 5 elements, while `InSet` is faster with 15 elements. The benchmarks use `UnsafeArrayData`.
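
As a sketch, building such an array input from a primitive array looks roughly like this:

```scala
import org.apache.spark.sql.catalyst.expressions.UnsafeArrayData

// The benchmark's array values sit in a compact binary layout, so element
// comparison does not allocate boxed objects.
val arr: UnsafeArrayData = UnsafeArrayData.fromPrimitiveArray(Array(1, 2, 3))
```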

### struct

`InSet` is always faster than `In` for structs. These benchmarks use `GenericInternalRow`.
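
A sketch of the row shape behind the struct benchmarks:

```scala
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow

// GenericInternalRow simply wraps an Array[Any]; struct equality and hashing
// recurse over the boxed field values, so a single hash probe in `InSet` can
// beat a chain of full `equals` calls in `In`.
val row = new GenericInternalRow(Array[Any](1, 2L))
```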

Closes #23291 from aokolnychyi/spark-26203.

Lead-authored-by: Anton Okolnychyi <aokolnychyi@apple.com>
Co-authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2019-01-15 07:25:50 -07:00
AggregateBenchmark-results.txt [SPARK-25476][SPARK-25510][TEST] Refactor AggregateBenchmark and add a new trait to better support Dataset and DataFrame API 2018-10-01 07:32:40 -07:00
BloomFilterBenchmark-results.txt [SPARK-25589][SQL][TEST] Add BloomFilterBenchmark 2018-10-03 04:14:07 -07:00
BuiltInDataSourceWriteBenchmark-results.txt [SPARK-25663][SPARK-25661][SQL][TEST] Refactor BuiltInDataSourceWriteBenchmark, DataSourceWriteBenchmark and AvroWriteBenchmark to use main method 2018-10-31 03:03:42 -07:00
ColumnarBatchBenchmark-results.txt [SPARK-25481][SQL][TEST] Refactor ColumnarBatchBenchmark to use main method 2018-09-26 20:40:10 -07:00
CompressionSchemeBenchmark-results.txt [SPARK-25478][SQL][TEST] Refactor CompressionSchemeBenchmark to use main method 2018-09-23 20:46:40 -07:00
CSVBenchmark-results.txt [SPARK-25848][SQL][TEST] Refactor CSVBenchmarks to use main method 2018-10-30 09:18:55 -07:00
DatasetBenchmark-results.txt [SPARK-25479][TEST] Refactor DatasetBenchmark to use main method 2018-10-04 11:58:16 -07:00
DataSourceReadBenchmark-results.txt [SPARK-26584][SQL] Remove spark.sql.orc.copyBatchToSpark internal conf 2019-01-10 08:42:23 -08:00
ExternalAppendOnlyUnsafeRowArrayBenchmark-results.txt [SPARK-25484][SQL][TEST] Refactor ExternalAppendOnlyUnsafeRowArrayBenchmark 2019-01-09 09:54:21 -08:00
FilterPushdownBenchmark-results.txt [SPARK-25438][SQL][TEST] Fix FilterPushdownBenchmark to use the same memory assumption 2018-09-15 17:48:39 -07:00
HashedRelationMetricsBenchmark-results.txt [SPARK-26337][SQL][TEST] Add benchmark for LongToUnsafeRowMap 2018-12-14 10:50:48 +08:00
InExpressionBenchmark-results.txt [SPARK-26203][SQL][TEST] Benchmark performance of In and InSet expressions 2019-01-15 07:25:50 -07:00
JoinBenchmark-results.txt [SPARK-25664][SQL][TEST] Refactor JoinBenchmark to use main method 2018-10-12 16:08:12 -07:00
JSONBenchmark-results.txt [SPARK-25931][SQL] Benchmarking creation of Jackson parser 2018-11-03 09:09:39 -07:00
MiscBenchmark-results.txt [SPARK-25488][SQL][TEST] Refactor MiscBenchmark to use main method 2018-10-06 08:47:43 -07:00
PrimitiveArrayBenchmark-results.txt [SPARK-25487][SQL][TEST] Refactor PrimitiveArrayBenchmark 2018-09-21 15:04:47 +09:00
RangeBenchmark-results.txt [SPARK-25710][SQL] range should report metrics correctly 2018-10-13 13:55:28 +08:00
SortBenchmark-results.txt [SPARK-25486][TEST] Refactor SortBenchmark to use main method 2018-09-25 11:13:05 -07:00
UnsafeArrayDataBenchmark-results.txt [SPARK-25483][TEST] Refactor UnsafeArrayDataBenchmark to use main method 2018-10-03 04:20:02 -07:00
WideSchemaBenchmark-results.txt [SPARK-25492][TEST] Refactor WideSchemaBenchmark to use main method 2018-10-20 17:31:13 -07:00
WideTableBenchmark-results.txt [SPARK-25676][SQL][FOLLOWUP] Use 'foreach(_ => ())' 2018-11-08 23:37:14 +08:00