Disallow TRACE HTTP method in servlets
Author: Sean Owen <sowen@cloudera.com>
Closes#4765 from srowen/SPARK-5983 and squashes the following commits:
421b25b [Sean Owen] Disallow TRACE HTTP method in servlets
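A hedged sketch of the idea in plain servlet terms, not necessarily the exact change in this commit: answer TRACE with 405 Method Not Allowed and defer everything else to normal dispatch.

```scala
import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

// Sketch only: reject TRACE at the servlet level, pass everything else through.
class NoTraceServlet extends HttpServlet {
  override protected def service(req: HttpServletRequest, res: HttpServletResponse): Unit = {
    if ("TRACE".equalsIgnoreCase(req.getMethod)) {
      res.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED)  // 405
    } else {
      super.service(req, res)  // normal doGet/doPost dispatch
    }
  }
}
```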
Remove unicode characters from MLlib file.
Author: Michael Griffiths <msjgriffiths@gmail.com>
Author: Griffiths, Michael (NYC-RPM) <michael.griffiths@reprisemedia.com>
Closes#4815 from msjgriffiths/SPARK-6063 and squashes the following commits:
bcd7de1 [Griffiths, Michael (NYC-RPM)] Change \u201D quote marks around 'theta' to standard single apostrophe (\x27)
38eb535 [Michael Griffiths] Merge pull request #2 from apache/master
b08e865 [Michael Griffiths] Merge pull request #1 from apache/master
This commit exists to close the following pull requests on Github:
Closes#1128 (close requested by 'srowen')
Closes#3425 (close requested by 'srowen')
Closes#4770 (close requested by 'srowen')
Closes#2813 (close requested by 'srowen')
pwendell tdas
These are the safer parts of PR #4754:
- SPARK-5979: All dependencies with the groupId `org.apache.spark` passed through `--packages` were being excluded from the dependency tree on the assumption that they would be in the assembly jar. This is not the case, so the exclusion rules had to be defined more explicitly.
- SPARK-6032: Ivy prints a whole lot of logs while retrieving dependencies. These were printed to `System.out`. Moved the logging to `System.err`.
Author: Burak Yavuz <brkyvz@gmail.com>
Closes#4802 from brkyvz/simple-streaming-fix and squashes the following commits:
e0f38cb [Burak Yavuz] Merge branch 'master' of github.com:apache/spark into simple-streaming-fix
bad921c [Burak Yavuz] [SPARK-5979][SPARK-6032] Smaller safer fix
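A minimal sketch of the SPARK-6032 part described above, assuming the goal is only to keep Ivy's retrieval logs off stdout (the helper name is hypothetical):

```scala
// Point System.out at stderr while Ivy resolves dependencies, then restore it,
// so verbose retrieval logs don't pollute stdout.
def withStdoutToStderr[T](resolve: => T): T = {
  val sysOut = System.out
  System.setOut(System.err)
  try resolve finally System.setOut(sysOut)
}
```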
These may conflict with the classes already in the NM. We shouldn't
be repackaging them.
Author: Marcelo Vanzin <vanzin@cloudera.com>
Closes#4820 from vanzin/SPARK-6070 and squashes the following commits:
871b566 [Marcelo Vanzin] The "d'oh how didn't I think of it before" solution.
3cba946 [Marcelo Vanzin] Use profile instead, so that dependencies don't need to be explicitly listed.
7a18a1b [Marcelo Vanzin] [SPARK-6070] [yarn] Remove unneeded classes from shuffle service jar.
The `__eq__` of DataType is not correct and the class cache is not used correctly (a created class cannot be found by its DataType), so lots of classes get created (saved in `_cached_cls`) and never released.
Also, all instances of the same DataType have the same hash code, so a dict keyed by them puts many objects under one hash bucket and lookups degrade into linear scans, making access to this dict very slow (depending on the implementation of CPython).
This PR also improves the performance of inferSchema (avoiding unnecessary converters for objects).
cc pwendell JoshRosen
Author: Davies Liu <davies@databricks.com>
Closes#4808 from davies/leak and squashes the following commits:
6a322a4 [Davies Liu] tests refactor
3da44fc [Davies Liu] fix __eq__ of Singleton
534ac90 [Davies Liu] add more checks
46999dc [Davies Liu] fix tests
d9ae973 [Davies Liu] fix memory leak in sql
This is a follow-up of #4720. By default, `spark-daemon.sh` writes PID files under `/tmp`, which makes it impossible to start multiple server instances simultaneously. This PR sets `SPARK_PID_DIR` to the Spark home directory to work around this problem.
Many thanks to chenghao-intel for pointing out this issue!
Author: Cheng Lian <lian@databricks.com>
Closes#4758 from liancheng/thriftserver-pid-dir and squashes the following commits:
252fa0f [Cheng Lian] Uses temporary directory as Thrift server PID directory
1b3d1e3 [Cheng Lian] Sets SPARK_HOME as SPARK_PID_DIR when running Thrift server test suites
cc tdas.
Author: Saisai Shao <saisai.shao@intel.com>
Closes#4817 from jerryshao/signature-minor-fix and squashes the following commits:
eebfaac [Saisai Shao] Remove useless type parameter
Should pass the Spark context to save/load.
CC: mengxr
Author: Joseph K. Bradley <joseph@databricks.com>
Closes#4816 from jkbradley/ml-io-doc-fix and squashes the following commits:
83d369d [Joseph K. Bradley] added comment to save,load parts of ML guide examples
2841170 [Joseph K. Bradley] Fixed save,load calls in ML guide examples
`ApplicationMaster.reporterThread` and `ApplicationMaster.allocator` are accessed in multiple threads, so they should be marked as `volatile`.
Author: zsxwing <zsxwing@gmail.com>
Closes#4814 from zsxwing/SPARK-6059 and squashes the following commits:
17d9386 [zsxwing] Add volatile to ApplicationMaster's reporterThread and allocator
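A hedged sketch of what the annotation buys, with simplified stand-in types for Spark's internals: `@volatile` makes writes from one thread (e.g. the main AM thread) visible to reads from others (e.g. the reporter thread).

```scala
// Illustrative holder, not Spark's actual ApplicationMaster.
class AmState {
  @volatile var reporterThread: Thread = _
  @volatile var allocator: AnyRef = _   // stand-in for YarnAllocator
}
```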
Because ApplicationMaster doesn't set SparkUncaughtExceptionHandler, an exception thrown in the user class won't be logged. This PR adds a `logError` call for it.
Author: zsxwing <zsxwing@gmail.com>
Closes#4813 from zsxwing/SPARK-6058 and squashes the following commits:
806c932 [zsxwing] Log the user class exception
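A hedged sketch of the idea, assuming the usual reflective invocation of the user class's main method; `mainMethod`, `userArgs`, and the `logError` parameter are illustrative stand-ins, not Spark's exact code.

```scala
import java.lang.reflect.{InvocationTargetException, Method}

def runUserClass(mainMethod: Method, userArgs: Array[String],
                 logError: (String, Throwable) => Unit): Unit = {
  try {
    mainMethod.invoke(null, userArgs)
  } catch {
    case e: InvocationTargetException =>
      // Log explicitly: no uncaught-exception handler would do it for us.
      logError("User class threw exception: " + e.getCause, e.getCause)
      throw e
  }
}
```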
For a detailed description, please refer to [SPARK-6036](https://issues.apache.org/jira/browse/SPARK-6036).
Author: Zhang, Liye <liye.zhang@intel.com>
Closes#4785 from liyezhang556520/EventLogInProcess and squashes the following commits:
8b0b0a6 [Zhang, Liye] stop listener after DAGScheduler
79b15b3 [Zhang, Liye] SPARK-6036 avoid race condition between eventlogListener and akka actor system
JIRA case SPARK-6033: https://issues.apache.org/jira/browse/SPARK-6033
In standalone deploy mode, the cleanup only removes the directories of stopped applications.
The original description of the cleanup behavior was incorrect.
Author: 许鹏 <peng.xu@fraudmetrix.cn>
Closes#4803 from hseagle/spark-6033 and squashes the following commits:
927a6a0 [许鹏] fix the incorrect description about the spark.worker.cleanup in standalone mode
The warning about deprecated configs is actually issued when the configs are set, not when they are read. As a result we don't need to explicitly call `translateConfKey` outside of `SparkConf` just to print the warning again in vain.
Author: Andrew Or <andrew@databricks.com>
Closes#4797 from andrewor14/warn-deprecated-config and squashes the following commits:
8fb43e6 [Andrew Or] Privatize SparkConf.translateConfKey
As agreed in PR #1160, this adds a test to verify that the history server generates relative links to applications.
Author: Lukasz Jastrzebski <lukasz.jastrzebski@gmail.com>
Closes#4778 from elyast/master and squashes the following commits:
0c07fab [Lukasz Jastrzebski] Incorporating comments for SPARK-2168
6d7866d [Lukasz Jastrzebski] Adjusting test for SPARK-2168 for master branch
d6f4fbe [Lukasz Jastrzebski] Added test for SPARK-2168
Add application kill function in master web UI for standalone mode. Details can be seen in [SPARK-5495](https://issues.apache.org/jira/browse/SPARK-5495).
A screenshot of the UI is shown below:
![snapshot](https://dl.dropboxusercontent.com/u/19230832/master_ui.png)
Please help to review, thanks a lot.
Author: jerryshao <saisai.shao@intel.com>
Closes#4288 from jerryshao/SPARK-5495 and squashes the following commits:
fa3e486 [jerryshao] Add some conditions
9a7be93 [jerryshao] Add kill Driver function
a239776 [jerryshao] Change the code format
ff5195d [jerryshao] Add app kill function in master web UI
cc andrewor14, srowen.
Author: jerryshao <saisai.shao@intel.com>
Closes#4800 from jerryshao/SPARK-5771 and squashes the following commits:
a2483c2 [jerryshao] Change the UI of Requested Cores into * if default cores is not set
JIRA: https://issues.apache.org/jira/browse/SPARK-6024
Author: Yin Huai <yhuai@databricks.com>
Closes#4795 from yhuai/wideSchema and squashes the following commits:
4882e6f [Yin Huai] Address comments.
73e71b4 [Yin Huai] Address comments.
143927a [Yin Huai] Simplify code.
cc1d472 [Yin Huai] Make the schema wider.
12bacae [Yin Huai] If the JSON string of a schema is too large, split it before storing it in metastore.
e9b4f70 [Yin Huai] Failed test.
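A hedged sketch of the splitting step, assuming a metastore-imposed limit on property-value length; the threshold and the property keys are illustrative, not necessarily the exact ones used.

```scala
import scala.collection.mutable

// Chunk the schema JSON so each stored property value stays under the limit,
// recording the part count so the reader can reassemble the string.
def storeSchema(schemaJson: String, props: mutable.Map[String, String]): Unit = {
  val threshold = 4000  // hypothetical per-value length limit
  val parts = schemaJson.grouped(threshold).toSeq
  props("spark.sql.sources.schema.numParts") = parts.size.toString
  parts.zipWithIndex.foreach { case (part, i) =>
    props(s"spark.sql.sources.schema.part.$i") = part
  }
}
```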
`FilteringParquetRowInputFormat` manually merges Parquet schemas before computing splits. However, this is duplicated work, because the schemas are already merged in `ParquetRelation2`; we don't need to re-merge them in the `InputFormat`.
Author: Liang-Chi Hsieh <viirya@gmail.com>
Closes#4786 from viirya/dup_parquet_schemas_merge and squashes the following commits:
ef78a5a [Liang-Chi Hsieh] Avoiding duplicate Parquet schema merging.
If a BlockManager has not sent a heartbeat for more than 120s, BlockManagerMasterActor will remove it. But CoarseGrainedSchedulerBackend can only remove an executor after a DisassociatedEvent. We should expire dead hosts in HeartbeatReceiver.
Author: Hong Shen <hongshen@tencent.com>
Closes#4363 from shenh062326/my_change3 and squashes the following commits:
2c9a46a [Hong Shen] Change some code style.
1a042ff [Hong Shen] Change some code style.
2dc456e [Hong Shen] Change some code style.
d221493 [Hong Shen] Fix test failed
7448ac6 [Hong Shen] A minor change in sparkContext and heartbeatReceiver
b904aed [Hong Shen] Fix failed test
52725af [Hong Shen] Remove assert in SparkContext.killExecutors
5bedcb8 [Hong Shen] Remove assert in SparkContext.killExecutors
a858fb5 [Hong Shen] A minor change in HeartbeatReceiver
3e221d9 [Hong Shen] A minor change in HeartbeatReceiver
6bab7aa [Hong Shen] Change a code style.
07952f3 [Hong Shen] Change configs name and code style.
ce9257e [Hong Shen] Fix test failed
bccd515 [Hong Shen] Fix test failed
8e77408 [Hong Shen] Fix test failed
c1dfda1 [Hong Shen] Fix test failed
e197e20 [Hong Shen] Fix test failed
fb5df97 [Hong Shen] Remove ExpireDeadHosts in BlockManagerMessages
b5c0441 [Hong Shen] Remove expireDeadHosts in BlockManagerMasterActor
c922cb0 [Hong Shen] Add expireDeadHosts in HeartbeatReceiver
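A hedged sketch of the responsibility being moved into HeartbeatReceiver, assuming it simply tracks the last heartbeat per executor; the class and method names are illustrative, not Spark's exact API.

```scala
import scala.collection.mutable

class HeartbeatBook(timeoutMs: Long) {
  private val executorLastSeen = mutable.HashMap.empty[String, Long]

  def recordHeartbeat(executorId: String): Unit =
    executorLastSeen(executorId) = System.currentTimeMillis()

  // Return the executors whose last heartbeat is older than the timeout;
  // the caller would then kill/remove them from the scheduler backend.
  def expireDeadHosts(): Seq[String] = {
    val now = System.currentTimeMillis()
    val dead = executorLastSeen.collect {
      case (id, lastSeen) if now - lastSeen > timeoutMs => id
    }.toSeq
    dead.foreach(executorLastSeen.remove)
    dead
  }
}
```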
Ensure scheduler delay handles the unfinished-task case, and ensure the delay is never negative, even after rounding.
Author: Sean Owen <sowen@cloudera.com>
Closes#4796 from srowen/SPARK-4579 and squashes the following commits:
ad6713c [Sean Owen] Ensure scheduler delay handles unfinished task case, and ensure delay is never negative even due to rounding
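A minimal sketch of the clamping, assuming the delay is whatever wall time the listed components don't account for (field names are illustrative):

```scala
// Rounding of the component times could otherwise push the remainder below zero.
def schedulerDelay(totalTime: Long, runTime: Long, deserTime: Long,
                   serTime: Long, gettingResultTime: Long): Long =
  math.max(0L, totalTime - runTime - deserTime - serTime - gettingResultTime)
```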
RecordWriter should be checked against null in PairRDDFunctions#saveAsNewAPIHadoopDataset.
Author: tedyu <yuzhihong@gmail.com>
Closes#4794 from tedyu/master and squashes the following commits:
2632a57 [tedyu] SPARK-6045 RecordWriter should be checked against null in PairRDDFunctions#saveAsNewAPIHadoopDataset
2d8d4b1 [tedyu] SPARK-6045 RecordWriter should be checked against null in PairRDDFunctions#saveAsNewAPIHadoopDataset
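A hedged sketch of the guard, assuming the standard write-then-close-in-finally shape; the helper and its parameters are illustrative.

```scala
import org.apache.hadoop.mapreduce.{RecordWriter, TaskAttemptContext}

// getRecordWriter may throw before `writer` is ever assigned, so the
// finally block must not call close() on a null writer.
def writeSafely[K, V](open: () => RecordWriter[K, V],
                      context: TaskAttemptContext,
                      write: RecordWriter[K, V] => Unit): Unit = {
  var writer: RecordWriter[K, V] = null
  try {
    writer = open()
    write(writer)
  } finally {
    if (writer != null) {
      writer.close(context)
    }
  }
}
```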
Remove unreachable driver memory properties in yarn client mode
Author: mohit.goyal <mohit.goyal@guavus.com>
Closes#4730 from zuxqoj/master and squashes the following commits:
977dc96 [mohit.goyal] remove not rechable deprecated variables in yarn client mode
The history server on YARN only shows completed jobs. This adds a note concerning the explicit context termination needed at the end of a Spark job, which is a best practice anyway.
Related to SPARK-2972 and SPARK-3458
Author: moussa taifi <moutai10@gmail.com>
Closes#4721 from moutai/add-history-server-note-for-closing-the-spark-context and squashes the following commits:
9f5b6c3 [moussa taifi] Fix upper case typo for YARN
3ad3db4 [moussa taifi] Add context termination for History server on Yarn
Close appender saving stdout/stderr before destroying process to avoid exception on reading closed input stream.
(This also removes a redundant `waitFor()` although it was harmless)
CC tdas since I think you wrote this method.
Author: Sean Owen <sowen@cloudera.com>
Closes#4787 from srowen/SPARK-4300 and squashes the following commits:
e0cdabf [Sean Owen] Close appender saving stdout/stderr before destroying process to avoid exception on reading closed input stream
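A hedged sketch of the ordering fix, with `java.io.Closeable` standing in for Spark's file appenders:

```scala
import java.io.Closeable

// Stop the appenders copying the child's stdout/stderr before destroying the
// process, so they never read from a stream the kill has already closed.
def killProcess(process: Process, appenders: Seq[Closeable]): Unit = {
  appenders.foreach(_.close())
  process.destroy()
}
```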
Author: Cheolsoo Park <cheolsoop@netflix.com>
Closes#4773 from piaozhexiu/SPARK-6018 and squashes the following commits:
2a919d5 [Cheolsoo Park] Rename e with cause to avoid duplicate names
1e71d2d [Cheolsoo Park] Replace placeholder with throwable
eb5750d [Cheolsoo Park] NoSuchMethodError in Spark app is swallowed by YARN AM
The problem with SPARK-6027, in short, is that JARs like the kafka-assembly.jar do not work in Python, as the added JAR is not visible in the classloader used by Py4J. Py4J uses Class.forName(), which does not use the system classloader, but the JARs are only visible in the thread's context classloader. So this fix uses the context class loader to create the KafkaUtils dstream object. This works both when the Kafka libraries are added with --jars spark-streaming-kafka-assembly.jar and when they are added with --packages spark-streaming-kafka.
This also improves the error message.
davies
Author: Tathagata Das <tathagata.das1565@gmail.com>
Closes#4779 from tdas/kafka-python-fix and squashes the following commits:
fb16b04 [Tathagata Das] Removed import
c1fdf35 [Tathagata Das] Fixed long line and improved documentation
7b88be8 [Tathagata Das] Fixed --jar not working for KafkaUtils and improved error message
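A hedged sketch of the loading strategy; the class name shown is illustrative.

```scala
// Resolve the helper class through the thread's context class loader, which
// sees JARs added via --jars, instead of Class.forName's defining loader.
object LoadHelper {
  def load(className: String): AnyRef = {
    val loader = Option(Thread.currentThread().getContextClassLoader)
      .getOrElse(getClass.getClassLoader)
    loader.loadClass(className).newInstance().asInstanceOf[AnyRef]
  }
}
```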
The configuration is currently not supported in Mesos mode.
See https://github.com/apache/spark/pull/1462
Author: Li Zhihui <zhihui.li@intel.com>
Closes#4781 from li-zhihui/fixdocconf and squashes the following commits:
63e7a44 [Li Zhihui] Modify default value description for spark.scheduler.minRegisteredResourcesRatio on docs.
Join on the output threads to make sure any lingering output from the process reaches stdout and stderr before exiting.
CC andrewor14 since I believe he created this section of code
Author: Sean Owen <sowen@cloudera.com>
Closes#4788 from srowen/SPARK-4704 and squashes the following commits:
ad7114e [Sean Owen] Join on output threads to make sure any lingering output from process reaches stdout, stderr before exiting
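A minimal sketch of the join, assuming dedicated copier threads for the child's stdout and stderr (parameter names are illustrative):

```scala
// Wait for the copier threads after the child exits, so buffered output is
// fully flushed before we return the exit code.
def waitForProcess(process: Process, stdoutThread: Thread, stderrThread: Thread): Int = {
  val exitCode = process.waitFor()
  stdoutThread.join()
  stderrThread.join()
  exitCode
}
```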
Removing elements from a mutable HashSet while iterating over it can cause the
iteration to incorrectly skip over entries that were not removed. If this
happened, PythonRDD would write fewer broadcast variables than the Python
worker was expecting to read, which would cause the Python worker to hang
indefinitely.
Author: Davies Liu <davies@databricks.com>
Closes#4776 from davies/fix_hang and squashes the following commits:
a4384a5 [Davies Liu] fix bug: remvoe() inside iterator is not safe
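A hedged illustration of the hazard and the fix, iterating over an immutable snapshot instead of the live set (the predicate is hypothetical):

```scala
import scala.collection.mutable

// Removing from a mutable HashSet while iterating over it can silently skip
// surviving elements; snapshot it first, then mutate the original freely.
val oldBids = mutable.HashSet(1L, 2L, 3L, 4L)
def isStale(id: Long): Boolean = id % 2 == 0  // hypothetical predicate
for (id <- oldBids.toSet if isStale(id)) {
  oldBids.remove(id)
}
```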
Since the validation error does not change monotonically, in practice it is more appropriate to pick the best model when training GradientBoostedTrees with validation than to stop early.
Author: Liang-Chi Hsieh <viirya@gmail.com>
Closes#4763 from viirya/gbt_record_model and squashes the following commits:
452e049 [Liang-Chi Hsieh] Address comment.
ea2fae2 [Liang-Chi Hsieh] Pick the best model when training GradientBoostedTrees with validation.
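A hedged sketch of the selection, assuming a non-empty per-iteration validation-error sequence is available; the helper is illustrative, not the PR's exact code.

```scala
// Choose the iteration with the lowest validation error instead of stopping
// at the first increase; +1 converts the 0-based index to an iteration count.
def bestNumIterations(validationErrors: Seq[Double]): Int =
  validationErrors.zipWithIndex.minBy(_._1)._2 + 1
```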
It is useful to let the user decide the number of rows to show in `DataFrame.show`.
Author: Jacky Li <jacky.likun@huawei.com>
Closes#4767 from jackylk/show and squashes the following commits:
a0e0f4b [Jacky Li] fix testcase
7cdbe91 [Jacky Li] modify according to comment
bb54537 [Jacky Li] for Java compatibility
d7acc18 [Jacky Li] modify according to comments
981be52 [Jacky Li] add numRows param in DataFrame.show()
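A hedged usage sketch of the added parameter (the wrapper exists only for illustration):

```scala
import org.apache.spark.sql.DataFrame

def preview(df: DataFrame): Unit = {
  df.show(5)   // caller-chosen row count, added by this change
  df.show()    // the default of 20 rows is unchanged
}
```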
Cache the value of the local root dirs to use for storing local data,
so that the same directories are reused.
Also, to avoid an extra level of nesting, use a different env variable
to propagate the local dirs from the Worker to the executors. And make
the executor directory use a different name.
Author: Marcelo Vanzin <vanzin@cloudera.com>
Closes#4747 from vanzin/SPARK-5801 and squashes the following commits:
e0114e1 [Marcelo Vanzin] Update unit test.
18ee0a7 [Marcelo Vanzin] [SPARK-5801] [core] Avoid creating nested directories.
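A hedged sketch of the caching, with `createLocalRootDirs` as a hypothetical stand-in for the real directory-creation logic:

```scala
import org.apache.spark.SparkConf

// Compute the local root dirs once and reuse the cached value, so repeated
// lookups don't keep creating fresh directories.
object LocalDirs {
  @volatile private var cached: Array[String] = null

  def getOrCreate(conf: SparkConf)(createLocalRootDirs: SparkConf => Array[String]): Array[String] = {
    if (cached == null) synchronized {
      if (cached == null) cached = createLocalRootDirs(conf)
    }
    cached
  }
}
```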
Please see JIRA (https://issues.apache.org/jira/browse/SPARK-6016) for details of the bug.
Author: Yin Huai <yhuai@databricks.com>
Closes#4775 from yhuai/parquetFooterCache and squashes the following commits:
78787b1 [Yin Huai] Remove footerCache in FilteringParquetRowInputFormat.
dff6fba [Yin Huai] Failed unit test.
Because Windows by default does not grant read permission on jars except to administrators, spark-submit fails with a "ClassNotFound" exception if the user runs the slave service with only user permission.
This fix adds read permission for the owner of the jar (which would be the slave service account on Windows).
Author: Judy Nash <judynash@microsoft.com>
Closes#4742 from judynash/SPARK-5914 and squashes the following commits:
e288e56 [Judy Nash] Fix spacing and refactor code
1de3c0e [Judy Nash] [SPARK-5914] Enable spark-submit to run requiring only user permission on windows
The model trained by ALS requires partitioning information to do quick lookups of user/item factors when making recommendations for individual requests. In the new implementation, we didn't set partitioners on the factors returned by ALS, which would cause a performance regression.
srowen coderxiang
Author: Xiangrui Meng <meng@databricks.com>
Closes#4748 from mengxr/SPARK-5976 and squashes the following commits:
9373a09 [Xiangrui Meng] add partitioner to factors returned by ALS
260f183 [Xiangrui Meng] add a test for partitioner
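A hedged sketch of the change, assuming the factor RDDs are keyed by id; `userFactors` and `numBlocks` are assumed inputs, not the PR's exact names.

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.rdd.RDD

// With a known partitioner attached, a lookup for one id touches a single
// partition instead of scanning every partition.
def withPartitioner(userFactors: RDD[(Int, Array[Float])], numBlocks: Int): RDD[(Int, Array[Float])] =
  userFactors.partitionBy(new HashPartitioner(numBlocks)).cache()
```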
* Add GradientBoostedTrees Python examples to ML guide
* I ran these in the pyspark shell, and they worked.
* Add save/load to examples in ML guide
* Added a note to the Python docs about predict/transform not working within RDD actions/transformations in some cases (see SPARK-5981)
CC: mengxr
Author: Joseph K. Bradley <joseph@databricks.com>
Closes#4750 from jkbradley/SPARK-5974 and squashes the following commits:
c410e38 [Joseph K. Bradley] Added note to LabeledPoint about attributes
bcae18b [Joseph K. Bradley] Added import of models for save/load examples in ml guide. Fixed line length for tree.py, feature.py (but not other ML Pyspark files yet).
6d81c3e [Joseph K. Bradley] completed python GBT examples
9903309 [Joseph K. Bradley] Added note to python docs about predict,transform not working within RDD actions,transformations in some cases
c7dfad8 [Joseph K. Bradley] Added model save/load to ML guide. Added GBT examples to ML guide
DataFrame.explain returns the wrong result when the query is a DDL command.
For example, the following two queries should print the same execution plan, but they don't:
`sql("create table tb as select * from src where key > 490").explain(true)`
`sql("explain extended create table tb as select * from src where key > 490")`
This is because DataFrame.explain leverages logicalPlan, which has already been forced to execute; we should use the unexecuted plan, queryExecution.logical.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes#4707 from yanboliang/spark-5926 and squashes the following commits:
fa6db63 [Yanbo Liang] logicalPlan is not lazy
0e40a1b [Yanbo Liang] make DataFrame.explain leverage queryExecution.logical
Author: Liang-Chi Hsieh <viirya@gmail.com>
Closes#4760 from viirya/dup_literal and squashes the following commits:
06e7516 [Liang-Chi Hsieh] Remove duplicate Literal matching block.
`ReadContext.init` calls `InitContext.getMergedKeyValueMetadata`, which doesn't know how to merge conflicting user-defined key-value metadata and throws an exception. In our case, when dealing with different but compatible schemas, we have different Spark SQL schema JSON strings in different Parquet part-files, which causes this problem. Reading similar Parquet files generated by Hive doesn't suffer from this issue.
In this PR, we manually merge the schemas before passing it to `ReadContext` to avoid the exception.
Author: Cheng Lian <lian@databricks.com>
Closes#4768 from liancheng/spark-6010 and squashes the following commits:
9002f0a [Cheng Lian] Fixes SPARK-6010
use RELEASE_VERSION when building the Python API docs
Author: Davies Liu <davies@databricks.com>
Closes#4731 from davies/api_version and squashes the following commits:
c9744c9 [Davies Liu] Update create-release.sh
08cbc3f [Davies Liu] fix python docs
This metric is incomplete, because the files are memory mapped, so much of the read from disk occurs later as tasks actually read the file's data.
This should be merged into 1.3, so that we never expose this incorrect metric to users.
CC pwendell ksakellis sryza
Author: Kay Ousterhout <kayousterhout@gmail.com>
Closes#4749 from kayousterhout/SPARK-5982 and squashes the following commits:
9737b5e [Kay Ousterhout] More fixes
a1eb300 [Kay Ousterhout] Removed one more use of local read time
cf13497 [Kay Ousterhout] [SPARK-5982] Remove incorrect Local Read Time Metric
Fixes the issue whereby VertexRDDs that are `diff`ed, `innerJoin`ed, or `leftJoin`ed fail under the `zipPartitions` method when they have different partition counts. This fix tests whether the partition counts are equal and, if not, repartitions the other RDD to match the partition count of the calling VertexRDD.
Author: Brennon York <brennon.york@capitalone.com>
Closes#4705 from brennonyork/SPARK-1955 and squashes the following commits:
0882590 [Brennon York] updated to properly handle differently-partitioned vertexRDDs
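A hedged sketch of the alignment on plain RDDs (a VertexRDD wraps one), since `zipPartitions` requires matching partition counts:

```scala
import org.apache.spark.rdd.RDD

// Repartition `other` to match `self` only when the counts differ.
def alignPartitions[A](self: RDD[A], other: RDD[A]): RDD[A] =
  if (other.partitions.length == self.partitions.length) other
  else other.repartition(self.partitions.length)
```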
As documented in createDirectory, the result of createDirectory is not registered for automatic removal. Currently there are 4 directories left in `/tmp` after just running `pyspark`.
Author: Milan Straka <fox@ucw.cz>
Closes#4759 from foxik/remove-tmp-dirs and squashes the following commits:
280450d [Milan Straka] Use createTempDir in getOrCreateLocalRootDirs...
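A hedged stand-in using only standard APIs for the difference being fixed; Spark's own `createTempDir` registers a shutdown hook rather than relying on `deleteOnExit`, so this is an approximation.

```scala
import java.io.File
import java.nio.file.Files

// Create a temp directory and register it for removal at JVM exit: the
// cleanup registration that plain createDirectory does not perform.
// (deleteOnExit only removes the directory if it is empty at shutdown.)
def createCleanedTempDir(prefix: String): File = {
  val dir = Files.createTempDirectory(prefix).toFile
  dir.deleteOnExit()
  dir
}
```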
Clarify default max wait in spark.shuffle.io.retryWait docs
CC andrewor14
Author: Sean Owen <sowen@cloudera.com>
Closes#4769 from srowen/SPARK-5930 and squashes the following commits:
ae2792b [Sean Owen] Clarify default max wait in spark.shuffle.io.retryWait docs
Author: Michael Armbrust <michael@databricks.com>
Closes#4757 from marmbrus/udtConversions and squashes the following commits:
3714aad [Michael Armbrust] [SPARK-5996][SQL] Fix specialized outbound conversions