[SPARK-33165][SQL][TEST] Remove dependencies(scalatest,scalactic) from Benchmark

### What changes were proposed in this pull request?

This PR proposes to remove `assert` from `Benchmark` to make it easier to run benchmark code via `spark-submit`.

### Why are the changes needed?

Since the current `Benchmark` (`master` and `branch-3.0`) has `assert`, we need to pass the proper jars of `scalatest` and `scalactic`:
 - scalatest-core_2.12-3.2.0.jar
 - scalatest-compatible-3.2.0.jar
 - scalactic_2.12-3.0.jar
```
./bin/spark-submit --jars scalatest-core_2.12-3.2.0.jar,scalatest-compatible-3.2.0.jar,scalactic_2.12-3.0.jar,./sql/catalyst/target/spark-catalyst_2.12-3.1.0-SNAPSHOT-tests.jar,./core/target/spark-core_2.12-3.1.0-SNAPSHOT-tests.jar --class org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmark ./sql/core/target/spark-sql_2.12-3.1.0-SNAPSHOT-tests.jar --data-location /tmp/tpcds-sf1
```

With this update, developers can submit benchmark code without these dependencies:
```
./bin/spark-submit --jars ./sql/catalyst/target/spark-catalyst_2.12-3.1.0-SNAPSHOT-tests.jar,./core/target/spark-core_2.12-3.1.0-SNAPSHOT-tests.jar --class org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmark ./sql/core/target/spark-sql_2.12-3.1.0-SNAPSHOT-tests.jar --data-location /tmp/tpcds-sf1
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manually checked.

Closes #30064 from maropu/RemoveDepInBenchmark.

Authored-by: Takeshi Yamamuro <yamamuro@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
Takeshi Yamamuro 2020-10-16 11:39:09 +09:00 committed by HyukjinKwon
parent bf594a9788
commit a5c17de241
2 changed files with 2 additions and 6 deletions


```
@@ -26,7 +26,6 @@ import scala.util.Try
 import org.apache.commons.io.output.TeeOutputStream
 import org.apache.commons.lang3.SystemUtils
-import org.scalatest.Assertions._
 import org.apache.spark.util.Utils
```
```
@@ -162,7 +161,6 @@ private[spark] class Benchmark(
 // scalastyle:off
 println(s" Stopped after $i iterations, ${NANOSECONDS.toMillis(runTimes.sum)} ms")
 // scalastyle:on
-assert(runTimes.nonEmpty)
 val best = runTimes.min
 val avg = runTimes.sum / runTimes.size
 val stdev = if (runTimes.size > 1) {
```
```
@@ -184,18 +182,15 @@ private[spark] object Benchmark {
 private var timeStart: Long = 0L
 def startTiming(): Unit = {
-  assert(timeStart == 0L, "Already started timing.")
   timeStart = System.nanoTime
 }
 def stopTiming(): Unit = {
-  assert(timeStart != 0L, "Have not started timing.")
   accumulatedTime += System.nanoTime - timeStart
   timeStart = 0L
 }
 def totalTime(): Long = {
-  assert(timeStart == 0L, "Have not stopped timing.")
   accumulatedTime
 }
}
```
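If similar state checks are ever wanted back, they do not require scalatest: Scala's built-in `Predef.require` (or `Predef.assert`) ships with the standard library and needs no extra jars. A hedged sketch under that assumption — the `SimpleTimer` name is hypothetical and this is not the code the PR ships:

```scala
// Sketch: the same timer state checks using only the Scala standard
// library (Predef.require), so spark-submit needs no scalatest/scalactic jars.
object SimpleTimer {
  private var accumulatedTime: Long = 0L
  private var timeStart: Long = 0L

  def startTiming(): Unit = {
    require(timeStart == 0L, "Already started timing.")
    timeStart = System.nanoTime
  }

  def stopTiming(): Unit = {
    require(timeStart != 0L, "Have not started timing.")
    accumulatedTime += System.nanoTime - timeStart
    timeStart = 0L
  }

  def totalTime(): Long = {
    require(timeStart == 0L, "Have not stopped timing.")
    accumulatedTime
  }
}
```

`require` throws `IllegalArgumentException` on failure and, unlike scalatest's `Assertions.assert`, pulls in no third-party dependency.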


```
@@ -31,7 +31,8 @@ import org.apache.spark.sql.execution.datasources.LogicalRelation
  * To run this:
  * {{{
  * 1. without sbt:
- *      bin/spark-submit --class <this class> <spark sql test jar> --data-location <location>
+ *      bin/spark-submit --jars <spark core test jar>,<spark catalyst test jar>
+ *        --class <this class> <spark sql test jar> --data-location <location>
  * 2. build/sbt "sql/test:runMain <this class> --data-location <TPCDS data location>"
  * 3. generate result: SPARK_GENERATE_BENCHMARK_FILES=1 build/sbt
  *    "sql/test:runMain <this class> --data-location <location>"
```