spark-instrumented-optimizer/python/pyspark/streaming
Marcelo Vanzin 24d7d2e453 [SPARK-13579][BUILD] Stop building the main Spark assembly.
This change modifies the "assembly/" module to just copy needed
dependencies to its build directory, and modifies the packaging
script to pick those up (and remove duplicate jars packaged in the
examples module).

I also made some minor adjustments to the dependencies to remove some
test jars from the final packaging, and to remove jars that conflict with
each other when packaged separately (e.g. the servlet API).

Also note that this change restores Guava on applications' classpaths, even
though it is still shaded inside Spark. This is now needed by the Hadoop
libraries that are packaged with Spark, which are no longer processed by
the shade plugin.

Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #11796 from vanzin/SPARK-13579.
2016-04-04 16:52:22 -07:00
__init__.py [SPARK-6328][PYTHON] Python API for StreamingListener 2015-11-16 11:29:27 -08:00
context.py [SPARK-12652][PYSPARK] Upgrade Py4J to 0.9.1 2016-01-12 14:27:05 -08:00
dstream.py [SPARK-13339][DOCS] Clarify commutative / associative operator requirements for reduce, fold 2016-02-19 10:26:38 +00:00
flume.py [SPARK-14073][STREAMING][TEST-MAVEN] Move flume back to Spark 2016-03-25 17:37:16 -07:00
kafka.py [SPARK-13848][SPARK-5185] Update to Py4J 0.9.2 in order to fix classloading issue 2016-03-14 12:22:02 -07:00
kinesis.py [SPARK-13848][SPARK-5185] Update to Py4J 0.9.2 in order to fix classloading issue 2016-03-14 12:22:02 -07:00
listener.py [SPARK-6328][PYTHON] Python API for StreamingListener 2015-11-16 11:29:27 -08:00
tests.py [SPARK-13579][BUILD] Stop building the main Spark assembly. 2016-04-04 16:52:22 -07:00
util.py [SPARK-12652][PYSPARK] Upgrade Py4J to 0.9.1 2016-01-12 14:27:05 -08:00
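To give a sense of how the modules listed above fit together, here is a minimal, illustrative sketch (not part of this directory): context.py provides StreamingContext, dstream.py the DStream transformations, and listener.py the StreamingListener hook added by SPARK-6328. The host, port, batch interval, and the PrintBatchInfoListener class are hypothetical placeholders, not code from this package.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.listener import StreamingListener


class PrintBatchInfoListener(StreamingListener):
    # Hypothetical listener: report how many records each completed batch handled.
    def onBatchCompleted(self, batchCompleted):
        print("Records in completed batch: %d" % batchCompleted.batchInfo().numRecords())


sc = SparkContext(appName="StreamingSketch")
ssc = StreamingContext(sc, batchDuration=1)          # 1-second batches
ssc.addStreamingListener(PrintBatchInfoListener())

# Word count over a socket stream; the function passed to reduceByKey must be
# commutative and associative (see the dstream.py docs change above), as addition is.
lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()

Feeding text to the socket (for example with nc -lk 9999) prints the word counts for each one-second batch, while the listener reports the record count of every completed batch.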