spark-instrumented-optimizer/repl
Marcelo Vanzin b3417b731d [SPARK-16451][REPL] Fail shell if SparkSession fails to start.
Currently, in spark-shell, if the session fails to start, the
user sees a stream of unrelated errors caused by code in the
shell initialization that references the "spark" variable,
which does not exist in that case. Things like:

```
<console>:14: error: not found: value spark
       import spark.sql
```

The user is also left with a non-working shell (unless they want
to just write non-Spark Scala or Python code, that is).

This change fails the whole shell session at the point where the
failure occurs, so that the last error message is the one with
the actual information about the failure.
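
A minimal sketch of the idea in Python (illustrative only; build_spark_session
is a hypothetical stand-in, not the actual pyspark startup code): abort the
interpreter at the point of failure instead of letting later setup code trip
over a missing "spark" variable.

```python
import sys
import traceback

def build_spark_session():
    # Hypothetical stand-in for the real SparkSession/SparkContext
    # startup; it fails here to simulate a broken configuration.
    raise RuntimeError("SparkContext failed to initialize")

try:
    spark = build_spark_session()
except Exception:
    traceback.print_exc()
    sys.exit(1)  # fail the whole shell at the point of failure
```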

For the Python error handling, I moved the session initialization code
to session.py, so that traceback.print_exc() only shows the last error.
Otherwise, the printed exception would contain all previous exceptions,
joined by the message "During handling of the above exception, another
exception occurred", making the actual error hard to pick out.
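
As a standalone illustration of that chaining behavior (not the actual
session.py change), compare the two patterns below:

```python
import traceback

def init_session():
    # Simulates the real failure during SparkContext initialization.
    raise RuntimeError("SparkContext failed to initialize")

# Before: follow-on setup code runs while the first exception is being
# handled, so its NameError is chained onto the original error and
# print_exc() shows both tracebacks, joined by "During handling of the
# above exception, another exception occurred:".
try:
    try:
        init_session()
    except Exception:
        spark.sql("SELECT 1")  # NameError: "spark" was never defined
except Exception:
    traceback.print_exc()

# After: the initialization is isolated, so print_exc() prints only the
# one traceback that actually matters.
try:
    init_session()
except Exception:
    traceback.print_exc()
```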

Tested with spark-shell and pyspark (with Python 2.7 and 3.5), by forcing
an error during SparkContext initialization.

Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #21368 from vanzin/SPARK-16451.
2018-06-05 08:29:29 +07:00
scala-2.11/src/main/scala/org/apache/spark/repl [SPARK-20706][SPARK-SHELL] Spark-shell not overriding method/variable definition 2017-12-05 18:08:36 -06:00
scala-2.12/src/main/scala/org/apache/spark/repl [SPARK-22572][SPARK SHELL] spark-shell does not re-initialize on :replay 2017-11-22 21:35:47 +09:00
src [SPARK-16451][REPL] Fail shell if SparkSession fails to start. 2018-06-05 08:29:29 +07:00
pom.xml [SPARK-23028] Bump master branch version to 2.4.0-SNAPSHOT 2018-01-13 00:37:59 +08:00