Commit graph

14043 commits

Author SHA1 Message Date
Andrew Ray 36282f78b8 [SPARK-12184][PYTHON] Make python api doc for pivot consistent with scala doc
In SPARK-11946 the pivot API was changed slightly and its doc was updated, but the doc changes were not made for the Python API. This PR updates the Python doc to be consistent.
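
For reference, a minimal sketch of the (post-SPARK-11946) Scala pivot API that the Python doc is being aligned with; the data and column names here are illustrative:

```scala
// In spark-shell, sqlContext is predefined; schema: (course, year, earnings).
val df = sqlContext.createDataFrame(Seq(
  ("dotNET", 2012, 10000), ("Java", 2012, 20000), ("dotNET", 2013, 48000)
)).toDF("course", "year", "earnings")

// Passing the pivot values explicitly skips the pass that would discover them.
df.groupBy("year").pivot("course", Seq("dotNET", "Java")).sum("earnings").show()
```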

Author: Andrew Ray <ray.andrew@gmail.com>

Closes #10176 from aray/sql-pivot-python-doc.
2015-12-07 15:01:00 -08:00
tedyu 84b809445f [SPARK-11884] Drop multiple columns in the DataFrame API
See the thread Ben started:
http://search-hadoop.com/m/q3RTtveEuhjsr7g/

This PR adds a drop() method to DataFrame that accepts multiple column names.
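
A sketch of the intended usage (data illustrative):

```scala
// In spark-shell, sqlContext is predefined.
val df = sqlContext.createDataFrame(Seq((1, "a", true))).toDF("id", "name", "flag")

// The varargs form drops several columns in one call instead of chaining drop().
val trimmed = df.drop("name", "flag")
trimmed.printSchema()  // only `id` remains
```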

Author: tedyu <yuzhihong@gmail.com>

Closes #9862 from ted-yu/master.
2015-12-07 14:58:09 -08:00
Xusen Yin 871e85d9c1 [SPARK-11963][DOC] Add docs for QuantileDiscretizer
https://issues.apache.org/jira/browse/SPARK-11963
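
A minimal usage sketch of the transformer being documented (values illustrative, 1.6-era ML API):

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

val data = sqlContext.createDataFrame(
  Seq((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2))
).toDF("id", "hour")

// fit() picks approximate quantile boundaries and returns a Bucketizer.
val discretizer = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("bucket")
  .setNumBuckets(3)
discretizer.fit(data).transform(data).show()
```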

Author: Xusen Yin <yinxusen@gmail.com>

Closes #9962 from yinxusen/SPARK-11963.
2015-12-07 13:16:47 -08:00
Shixiong Zhu 3f4efb5c23 [SPARK-12060][CORE] Avoid memory copy in JavaSerializerInstance.serialize
Merged #10051 again since #10083 is resolved.

This reverts commit 328b757d5d.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #10167 from zsxwing/merge-SPARK-12060.
2015-12-07 12:01:09 -08:00
Tathagata Das 5d80d8c6a5 [SPARK-11932][STREAMING] Partition previous TrackStateRDD if partitioner not present
The reason is that TrackStateRDDs generated by trackStateByKey expect the previous batch's TrackStateRDDs to have a partitioner. However, when recovering from DStream checkpoints, the RDDs recovered from RDD checkpoints do not have a partitioner attached to them. This is because RDD checkpoints do not preserve the partitioner (SPARK-12004).

While #9983 solves SPARK-12004 by preserving the partitioner through RDD checkpoints, there may be a non-zero chance that the saving and recovery fails. To be resilient, this PR repartitions the previous state RDD if the partitioner is not detected.
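
In sketch form (helper name hypothetical; the real change lives where TrackStateRDDs are created):

```scala
import scala.reflect.ClassTag
import org.apache.spark.Partitioner
import org.apache.spark.rdd.RDD

// Repartition the recovered state RDD only if its partitioner was lost
// (RDD checkpoints do not preserve the partitioner; see SPARK-12004).
def ensurePartitioned[K: ClassTag, V: ClassTag](
    rdd: RDD[(K, V)], p: Partitioner): RDD[(K, V)] =
  if (rdd.partitioner == Some(p)) rdd else rdd.partitionBy(p)
```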

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #9988 from tdas/SPARK-11932.
2015-12-07 11:03:59 -08:00
Davies Liu ef3f047c07 [SPARK-12132] [PYSPARK] raise KeyboardInterrupt inside SIGINT handler
Currently, the current line is not cleared by Ctrl-C.

After this patch
```
>>> asdfasdf^C
Traceback (most recent call last):
  File "~/spark/python/pyspark/context.py", line 225, in signal_handler
    raise KeyboardInterrupt()
KeyboardInterrupt
```

It's still worse than 1.5 (and before).

Author: Davies Liu <davies@databricks.com>

Closes #10134 from davies/fix_cltrc.
2015-12-07 11:00:25 -08:00
Sun Rui 39d677c8f1 [SPARK-12034][SPARKR] Eliminate warnings in SparkR test cases.
This PR:
1. Suppress all known warnings.
2. Clean up test cases and fix some errors in them.
3. Fix errors in HiveContext related test cases. These test cases are actually not run previously due to a bug of creating TestHiveContext.
4. Support 'testthat' package version 0.11.0 which prefers that test cases be under 'tests/testthat'
5. Make sure the default Hadoop file system is local when running test cases.
6. Turn warnings into errors.

Author: Sun Rui <rui.sun@intel.com>

Closes #10030 from sun-rui/SPARK-12034.
2015-12-07 10:38:17 -08:00
Davies Liu 9cde7d5fa8 [SPARK-12032] [SQL] Re-order inner joins to do join with conditions first
Currently, the order of joins is exactly the same as in the SQL query; some conditions may not be pushed down to the correct join, and those joins then become cross products, which are extremely slow.

This patch tries to re-order the inner joins (which are common in SQL queries), picking the joins that have self-contained conditions first and delaying those that have no conditions.

After this patch, the TPCDS queries Q64/65 can run hundreds of times faster.
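
A toy model of the greedy idea (types hypothetical; not the actual Catalyst rule):

```scala
// A join condition connecting two relations, identified by name.
case class Cond(a: String, b: String)

// Starting from the first relation, always pick next a relation that shares a
// condition with what has been joined so far; condition-free relations go last,
// so no intermediate step degenerates into an unconditioned cross product.
def reorder(relations: Seq[String], conds: Seq[Cond]): Seq[String] = {
  val remaining = relations.toBuffer
  val ordered = scala.collection.mutable.Buffer(remaining.remove(0))
  while (remaining.nonEmpty) {
    val i = remaining.indexWhere(r => conds.exists(c =>
      (ordered.contains(c.a) && c.b == r) || (ordered.contains(c.b) && c.a == r)))
    ordered += remaining.remove(math.max(i, 0)) // -1 => keep original order
  }
  ordered.toList
}

// reorder(Seq("a", "b", "c"), Seq(Cond("a", "c"))) == List("a", "c", "b")
```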

cc marmbrus nongli

Author: Davies Liu <davies@databricks.com>

Closes #10073 from davies/reorder_joins.
2015-12-07 10:34:18 -08:00
Burak Yavuz 6fd9e70e3e [SPARK-12106][STREAMING][FLAKY-TEST] BatchedWAL test transiently flaky when Jenkins load is high
We need to make sure that the last entry is indeed the last entry in the queue.

Author: Burak Yavuz <brkyvz@gmail.com>

Closes #10110 from brkyvz/batch-wal-test-fix.
2015-12-07 00:21:55 -08:00
Josh Rosen 80a824d36e [SPARK-12152][PROJECT-INFRA] Speed up Scalastyle checks by only invoking SBT once
Currently, `dev/scalastyle` invokes SBT four times, but these invocations can be replaced with a single invocation, saving about one minute of build time.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #10151 from JoshRosen/speed-up-scalastyle.
2015-12-06 17:35:01 -08:00
gatorsmile 49efd03bac [SPARK-12138][SQL] Escape \u in the generated comments of codegen
When \u appears in a comment block (i.e. in /* */), codegen will break. So, in Expression and CodegenFallback, we escape \u to \\u.
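
In sketch form (helper name illustrative):

```scala
// javac pre-processes unicode escapes (backslash-u) anywhere in a .java file,
// comments included, so the two-character sequence must be doubled before it
// is emitted inside a generated /* ... */ block.
def toCommentSafe(text: String): String = text.replace("\\u", "\\\\u")
```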

yhuai Please review it. I did reproduce it and it works after the fix. Thanks!

Author: gatorsmile <gatorsmile@gmail.com>

Closes #10155 from gatorsmile/escapeU.
2015-12-06 11:15:02 -08:00
gcc 04b6799932 [SPARK-12048][SQL] Prevent closing JDBC resources twice
Author: gcc <spark-src@condor.rhaag.ip>

Closes #10101 from rh99/master.
2015-12-06 16:27:40 +00:00
Yanbo Liang b6e8e63a0d [SPARK-12044][SPARKR] Fix usage of isnan, isNaN
1. Add ```isNaN``` to ```Column``` for SparkR. ```Column``` should have three related variable functions: ```isNaN, isNull, isNotNull```.
2. Replace ```DataFrame.isNaN``` with ```DataFrame.isnan``` on the SparkR side, because ```DataFrame.isNaN``` has been deprecated and will be removed in Spark 2.0.
<del>3. Add ```isnull``` to ```DataFrame``` for SparkR. ```DataFrame``` should have two related functions: ```isnan, isnull```.</del>

cc shivaram sun-rui felixcheung

Author: Yanbo Liang <ybliang8@gmail.com>

Closes #10037 from yanboliang/spark-12044.
2015-12-05 22:51:05 -08:00
Yanbo Liang 6979edf4e1 [SPARK-12115][SPARKR] Change numPartitions() to getNumPartitions() to be consistent with Scala/Python
Change ```numPartitions()``` to ```getNumPartitions()``` to be consistent with Scala/Python.
<del>Note: If we can not catch up with the 1.6 release, it will be a breaking change for 1.7 that we also need to explain in the release notes.</del>

cc sun-rui felixcheung shivaram

Author: Yanbo Liang <ybliang8@gmail.com>

Closes #10123 from yanboliang/spark-12115.
2015-12-05 16:39:01 -08:00
felixcheung 895b6c4747 [SPARK-11715][SPARKR] Add R support for corr for Column Aggregation
Needs to match the existing method signature.

Author: felixcheung <felixcheung_m@hotmail.com>

Closes #9680 from felixcheung/rcorr.
2015-12-05 16:00:12 -08:00
Sun Rui c8d0e160da [SPARK-11774][SPARKR] Implement struct(), encode(), decode() functions in SparkR.
Author: Sun Rui <rui.sun@intel.com>

Closes #9804 from sun-rui/SPARK-11774.
2015-12-05 15:49:51 -08:00
Sean Owen 7da6748519 [SPARK-11988][ML][MLLIB] Update JPMML to 1.2.7
Update JPMML pmml-model to 1.2.7

Author: Sean Owen <sowen@cloudera.com>

Closes #9972 from srowen/SPARK-11988.
2015-12-05 15:52:52 +00:00
Antonio Murgia e9c9ae22b9 [SPARK-11994][MLLIB] Word2VecModel load and save cause SparkException when model is bigger than spark.kryoserializer.buffer.max
Author: Antonio Murgia <antonio.murgia2@studio.unibo.it>

Closes #9989 from tmnd1991/SPARK-11932.
2015-12-05 15:42:02 +00:00
Yuhao Yang ee94b70ce5 [SPARK-12096][MLLIB] remove the old constraint in word2vec
jira: https://issues.apache.org/jira/browse/SPARK-12096

Word2vec can now handle a much bigger vocabulary.
The old constraint vocabSize.toLong * vectorSize < Int.MaxValue / 8 should be removed.

The new constraint is vocabSize.toLong * vectorSize < max array length (usually a little less than Int.MaxValue).
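
A sketch of the relaxed check (names illustrative):

```scala
// JVM arrays top out a little below Int.MaxValue elements, so the flattened
// vocabSize x vectorSize matrix must fit under that bound.
def checkCapacity(vocabSize: Int, vectorSize: Int): Unit = {
  val maxArrayLength = Int.MaxValue - 8
  require(vocabSize.toLong * vectorSize < maxArrayLength,
    s"vocabSize ($vocabSize) * vectorSize ($vectorSize) must be < $maxArrayLength")
}
```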

I tested with vocabSize over 18M and vectorSize = 100.

srowen jkbradley Sorry to have missed this in the last PR. I was reminded today.

Author: Yuhao Yang <hhbyyh@gmail.com>

Closes #10103 from hhbyyh/w2vCapacity.
2015-12-05 15:27:31 +00:00
Shixiong Zhu 3af53e61fd [SPARK-12084][CORE] Fix code that uses ByteBuffer.array incorrectly
`ByteBuffer` doesn't guarantee all contents in `ByteBuffer.array` are valid, e.g., a ByteBuffer returned by `ByteBuffer.slice`. We should not use the whole content of a `ByteBuffer` unless we know that's correct.

This patch fixes all places that used `ByteBuffer.array` incorrectly.
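
An illustration of the pitfall:

```scala
import java.nio.ByteBuffer

val whole = ByteBuffer.wrap(Array[Byte](1, 2, 3, 4, 5))
whole.position(1)
whole.limit(4)
val slice = whole.slice()  // logical contents: 2, 3, 4

slice.array()              // WRONG: the shared backing array, 1, 2, 3, 4, 5

// Safe: copy exactly remaining() bytes, honoring position()/arrayOffset().
val copy = new Array[Byte](slice.remaining())
slice.duplicate().get(copy)  // copy == Array(2, 3, 4)
```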

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #10083 from zsxwing/bytebuffer-array.
2015-12-04 17:02:04 -08:00
rotems f30373f5ee [SPARK-12080][CORE] Kryo - Support multiple user registrators
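
A sketch of the intended configuration (class names illustrative; assuming the existing key now accepts a comma-separated list):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Assumption: multiple registrators, comma-separated, on the existing key.
  .set("spark.kryo.registrator",
    "com.example.RegistratorA,com.example.RegistratorB")
```
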
Author: rotems <roter>

Closes #10078 from Botnaim/KryoMultipleCustomRegistrators.
2015-12-04 16:58:34 -08:00
meiyoula bbfc16ec9d [SPARK-12142][CORE]Reply false when container allocator is not ready and reset target
With the dynamic allocation function, when a new AM is starting, ExecutorAllocationManager sends a RequestExecutor message to the AM. If the container allocator is not ready, the whole app will hang.

Author: meiyoula <1039320815@qq.com>

Closes #10138 from XuTingjun/patch-1.
2015-12-04 16:50:43 -08:00
Josh Rosen b7204e1d41 [SPARK-12112][BUILD] Upgrade to SBT 0.13.9
We should upgrade to SBT 0.13.9, since this is a requirement in order to use SBT's new Maven-style resolution features (which will be done in a separate patch, because it's blocked by some binary compatibility issues in the POM reader plugin).

I also upgraded Scalastyle to version 0.8.0, which was necessary in order to fix a Scala 2.10.5 compatibility issue (see https://github.com/scalastyle/scalastyle/issues/156). The newer Scalastyle is slightly stricter about whitespace surrounding tokens, so I fixed the new style violations.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #10112 from JoshRosen/upgrade-to-sbt-0.13.9.
2015-12-05 08:15:30 +08:00
Marcelo Vanzin d64806b373 [SPARK-11314][BUILD][HOTFIX] Add exclusion for moved YARN classes.
Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #10147 from vanzin/SPARK-11314.
2015-12-04 13:05:07 -08:00
Burak Yavuz 302d68de87 [SPARK-12058][STREAMING][KINESIS][TESTS] fix Kinesis python tests
Python tests require access to the `KinesisTestUtils` file. When this file exists under src/test, python can't access it, since it is not available in the assembly jar.

However, if we move KinesisTestUtils to src/main, we need to add the KinesisProducerLibrary as a dependency. In order to avoid this, I moved KinesisTestUtils to src/main and extended it with ExtendedKinesisTestUtils, which lives under src/test and adds support for the KPL.

cc zsxwing tdas

Author: Burak Yavuz <brkyvz@gmail.com>

Closes #10050 from brkyvz/kinesis-py.
2015-12-04 12:08:42 -08:00
Dmitry Erastov d0d8222778 [SPARK-6990][BUILD] Add Java linting script; fix minor warnings
This replaces https://github.com/apache/spark/pull/9696

Invoke Checkstyle and print any errors to the console, failing the step.
Use Google's style rules modified according to
https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
Some important checks are disabled (see TODOs in `checkstyle.xml`) due to
multiple violations being present in the codebase.

I suggest fixing those TODOs in separate PRs.

More on Checkstyle can be found on the [official website](http://checkstyle.sourceforge.net/).

Sample output (from [build 46345](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/46345/consoleFull)) (duplicated because I ran the build twice with different profiles):

> Checkstyle checks failed at following occurrences:
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/UnsafeRowParquetRecordReader.java:[217,7] (coding) MissingSwitchDefault: switch without "default" clause.
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java:[198,10] (modifier) ModifierOrder: 'protected' modifier out of order with the JLS suggestions.
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/UnsafeRowParquetRecordReader.java:[217,7] (coding) MissingSwitchDefault: switch without "default" clause.
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java:[198,10] (modifier) ModifierOrder: 'protected' modifier out of order with the JLS suggestions.
> [error] running /home/jenkins/workspace/SparkPullRequestBuilder2/dev/lint-java ; received return code 1

Also fix some of the minor violations that didn't require sweeping changes.

Apologies for the previous botched PRs - I finally figured out the issue.

cr: JoshRosen, pwendell

> I state that the contribution is my original work, and I license the work to the project under the project's open source license.

Author: Dmitry Erastov <derastov@gmail.com>

Closes #9867 from dskrvk/master.
2015-12-04 12:03:45 -08:00
Nong 95296d9b1a [SPARK-12089] [SQL] Fix memory corruption due to freeing a page still being referenced
When the spillable sort iterator was spilled, it was mistakenly keeping
the last page in memory rather than the current page. This causes the
current record to get corrupted.

Author: Nong <nong@cloudera.com>

Closes #10142 from nongli/spark-12089.
2015-12-04 10:01:20 -08:00
kaklakariada 17e4e021ae Add links on how to set up IDEs for developing Spark
These links make it easier for new developers to work with Spark in their IDE.

Author: kaklakariada <kaklakariada@users.noreply.github.com>

Closes #10104 from kaklakariada/readme-developing-ide-gettting-started.
2015-12-04 14:43:16 +00:00
Tathagata Das 4106d80fb6 [SPARK-12122][STREAMING] Prevent batches from being submitted twice after recovering StreamingContext from checkpoint
Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #10127 from tdas/SPARK-12122.
2015-12-04 01:42:29 -08:00
Sun Rui 5011f264fb [SPARK-12104][SPARKR] collect() does not handle multiple columns with same name.
Author: Sun Rui <rui.sun@intel.com>

Closes #10118 from sun-rui/SPARK-12104.
2015-12-03 21:11:10 -08:00
Carson Wang b6e9963ee4 [SPARK-11206] Support SQL UI on the history server (resubmit)
Resubmit #9297 and #9991
On the live web UI, there is a SQL tab which provides valuable information for the SQL query. But once the workload is finished, we won't see the SQL tab on the history server. It will be helpful if we support SQL UI on the history server so we can analyze it even after its execution.

To support SQL UI on the history server:
1. I added an onOtherEvent method to the SparkListener trait and posted all SQL-related events to the same event bus (see the sketch after this list).
2. Two SQL events SparkListenerSQLExecutionStart and SparkListenerSQLExecutionEnd are defined in the sql module.
3. The new SQL events are written to event log using Jackson.
4. A new trait SparkHistoryListenerFactory is added to allow the history server to feed events to the SQL history listener. The SQL implementation is loaded at runtime using java.util.ServiceLoader.
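
A simplified sketch of the extension point (signatures abridged; the real event classes carry more fields):

```scala
trait SparkListenerEvent

// Defined in the sql module; written to the event log via Jackson.
case class SparkListenerSQLExecutionStart(executionId: Long) extends SparkListenerEvent
case class SparkListenerSQLExecutionEnd(executionId: Long, time: Long) extends SparkListenerEvent

trait SparkListener {
  // Catch-all hook so modules outside core can define their own events.
  def onOtherEvent(event: SparkListenerEvent): Unit = {}
}
```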

Author: Carson Wang <carson.wang@intel.com>

Closes #10061 from carsonwang/SqlHistoryUI.
2015-12-03 16:39:12 -08:00
Anderson de Andrade f434f36d50 [SPARK-12056][CORE] Create a TaskAttemptContext only after calling setConf.
TaskAttemptContext's constructor clones the configuration instead of referencing it. Calling setConf after creating a TaskAttemptContext therefore makes any changes made to the configuration inside setConf invisible to RecordReader instances.
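
The fix is simply a matter of ordering; a sketch (variables as in NewHadoopRDD are assumed):

```scala
import org.apache.hadoop.conf.Configurable
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl

// setConf may mutate conf, and TaskAttemptContextImpl clones conf in its
// constructor, so the mutation must happen first.
val format = inputFormatClass.newInstance()
format match {
  case configurable: Configurable => configurable.setConf(conf)
  case _ =>
}
val attemptContext = new TaskAttemptContextImpl(conf, attemptId)
```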

As an example, Titan's InputFormat will change conf when calling setConf. They wrap their InputFormat around Cassandra's ColumnFamilyInputFormat, and append Cassandra's configuration. This change fixes the following error when using Titan's CassandraInputFormat with Spark:

*java.lang.RuntimeException: org.apache.thrift.protocol.TProtocolException: Required field 'keyspace' was not present! Struct: set_key space_args(keyspace:null)*

There's a discussion of this error here: https://groups.google.com/forum/#!topic/aureliusgraphs/4zpwyrYbGAE

Author: Anderson de Andrade <adeandrade@verticalscope.com>

Closes #10046 from adeandrade/newhadooprdd-fix.
2015-12-03 16:37:00 -08:00
felixcheung 2213441e5e [SPARK-12019][SPARKR] Support character vector for sparkR.init(), check param and fix doc
and add tests.
Spark submit expects a comma-separated list.

Author: felixcheung <felixcheung_m@hotmail.com>

Closes #10034 from felixcheung/sparkrinitdoc.
2015-12-03 13:25:20 -08:00
Tathagata Das a02d472773 [FLAKY-TEST-FIX][STREAMING][TEST] Make sure StreamingContexts are shutdown after test
Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #10124 from tdas/InputStreamSuite-flaky-test.
2015-12-03 12:00:09 -08:00
Nicholas Chammas ad7cea6f77 [SPARK-12107][EC2] Update spark-ec2 versions
I haven't created a JIRA. If we absolutely need one I'll do it, but I'm fine with not getting mentioned in the release notes if that's the only purpose it'll serve.

cc marmbrus - We should include this in 1.6-RC2 if there is one. I can open a second PR against branch-1.6 if necessary.

Author: Nicholas Chammas <nicholas.chammas@gmail.com>

Closes #10109 from nchammas/spark-ec2-versions.
2015-12-03 11:59:10 -08:00
Yanbo Liang d576e76bba [MINOR][ML] Use coefficients to replace weights
Use ```coefficients``` to replace ```weights```; I hope these are the last two.
mengxr

Author: Yanbo Liang <ybliang8@gmail.com>

Closes #10065 from yanboliang/coefficients.
2015-12-03 11:37:34 -08:00
Andrew Or 688e521c28 [SPARK-12108] Make event logs smaller
**Problem.** Event logs in 1.6 were much bigger than 1.5. I ran page rank and the event log size in 1.6 was almost 5x that in 1.5. I did a bisect to find that the RDD callsite added in #9398 is largely responsible for this.

**Solution.** This patch removes the long form of the callsite (which is not used!) from the event log. This reduces the size of the event log significantly.

*Note on compatibility*: if this patch is to be merged into 1.6.0, then it won't break any compatibility. Otherwise, if it is merged into 1.6.1, then we might need to add more backward compatibility handling logic (currently does not exist yet).

Author: Andrew Or <andrew@databricks.com>

Closes #10115 from andrewor14/smaller-event-logs.
2015-12-03 11:09:29 -08:00
Shixiong Zhu 649be4fa45 [SPARK-12101][CORE] Fix thread pools that cannot cache tasks in Worker and AppClient
`SynchronousQueue` cannot cache any task. This issue is similar to #9978. It's an easy fix. Just use the fixed `ThreadUtils.newDaemonCachedThreadPool`.
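
In sketch form (ThreadUtils is Spark-internal, so this compiles only inside Spark; the pool name follows AppClient's convention):

```scala
import org.apache.spark.util.ThreadUtils

// A SynchronousQueue-backed pool hands tasks straight to threads and caches
// nothing; the cached daemon pool below can queue, grow, and reuse threads.
val registerMasterThreadPool = ThreadUtils.newDaemonCachedThreadPool(
  "appclient-register-master-threadpool")
```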

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #10108 from zsxwing/fix-threadpool.
2015-12-03 11:06:25 -08:00
jerryshao 7bc9e1db2c [SPARK-12059][CORE] Avoid assertion error when unexpected state transition met in Master
Downgrade to warning log for unexpected state transition.

andrewor14 please review, thanks a lot.

Author: jerryshao <sshao@hortonworks.com>

Closes #10091 from jerryshao/SPARK-12059.
2015-12-03 11:05:12 -08:00
Steve Loughran 8fa3e474a8 [SPARK-11314][YARN] add service API and test service for Yarn Cluster schedulers
This is purely the yarn/src/main and yarn/src/test bits of the YARN ATS integration: the extension model to load and run implementations of `SchedulerExtensionService` in the yarn cluster scheduler process, and to stop them afterwards.

There's duplication between the two schedulers, yarn-client and yarn-cluster, at least in terms of setting everything up, because the common superclass, `YarnSchedulerBackend`, is in spark-core, and the extension services need the YARN app/attempt IDs.

If you look at how the extension services are loaded, the case class `SchedulerExtensionServiceBinding` is used to pass in config info (currently just the Spark context and the YARN IDs, of which one, the attemptId, will be null when running client-side). I'm passing in a case class to ensure that it would be possible in future to add extra arguments to the binding class, yet, as the method signature will not have changed, still be able to load existing services.
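
A condensed sketch of the binding and service (fields abridged):

```scala
import org.apache.hadoop.yarn.api.records.{ApplicationAttemptId, ApplicationId}
import org.apache.spark.SparkContext

// Passing a case class means future fields can be added without breaking
// already-compiled services.
case class SchedulerExtensionServiceBinding(
    sparkContext: SparkContext,
    applicationId: ApplicationId,
    attemptId: Option[ApplicationAttemptId] = None) // None when client-side

trait SchedulerExtensionService {
  def start(binding: SchedulerExtensionServiceBinding): Unit
  def stop(): Unit
}
```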

There's no functional extension service here, just one for testing. The real tests come in the bigger pull requests. At the same time, there's no restriction of this extension service purely to the ATS history publisher. Anything else that wants to listen to the spark context and publish events could use this, and I'd also consider writing one for the YARN-913 registry service, so that the URLs of the web UI would be locatable through that (low priority; would make more sense if integrated with a REST client).

There's no minicluster test. Given the test execution overhead of setting up minicluster tests, it'd probably be better to add an extension service into one of the existing tests.

Author: Steve Loughran <stevel@hortonworks.com>

Closes #9182 from steveloughran/stevel/feature/SPARK-1537-service.
2015-12-03 10:33:06 -08:00
felixcheung 43c575cb17 [SPARK-12116][SPARKR][DOCS] document how to work around function name conflicts with dplyr
shivaram

Author: felixcheung <felixcheung_m@hotmail.com>

Closes #10119 from felixcheung/rdocdplyrmasked.
2015-12-03 09:22:21 -08:00
microwishing 95b3cf125b [DOCUMENTATION][KAFKA] fix typo in kafka/OffsetRange.scala
This fixes some typos in external/kafka/src/main/scala/org/apache/spark/streaming/kafka/OffsetRange.scala.

Author: microwishing <wei.zhu@kaiyuandao.com>

Closes #10121 from microwishing/master.
2015-12-03 16:09:05 +00:00
Jeff Zhang 7470d9edbb [DOCUMENTATION][MLLIB] typo in mllib doc
\cc mengxr

Author: Jeff Zhang <zjffdu@apache.org>

Closes #10093 from zjffdu/mllib_typo.
2015-12-03 15:36:28 +00:00
Huaxin Gao 5349851f36 [SPARK-12088][SQL] check connection.isClosed before calling connection…
Per the Java spec, java.sql.Connection has:
boolean getAutoCommit() throws SQLException
Throws:
SQLException - if a database access error occurs or this method is called on a closed connection

So if conn.getAutoCommit is called on a closed connection, a SQLException will be thrown. Even though the code catches the SQLException and the program can continue, I think we should check conn.isClosed before calling conn.getAutoCommit to avoid the unnecessary SQLException.
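
In sketch form:

```scala
import java.sql.Connection

// Consult autoCommit only on a connection known to be open; getAutoCommit on
// a closed connection throws SQLException per the JDBC spec.
def commitAndClose(conn: Connection): Unit = {
  if (conn != null && !conn.isClosed) {
    if (!conn.getAutoCommit) conn.commit()
    conn.close()
  }
}
```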

Author: Huaxin Gao <huaxing@oc0558782468.ibm.com>

Closes #10095 from huaxingao/spark-12088.
2015-12-03 08:42:21 +00:00
Yin Huai ec2b6c26c9 [SPARK-12109][SQL] Expression's simpleString should delegate to its toString.
https://issues.apache.org/jira/browse/SPARK-12109

The change of https://issues.apache.org/jira/browse/SPARK-11596 exposed the problem.
In the sql plan viz, the filter shows

![image](https://cloud.githubusercontent.com/assets/2072857/11547075/1a285230-9906-11e5-8481-2bb451e35ef1.png)

After changes in this PR, the viz is back to normal.
![image](https://cloud.githubusercontent.com/assets/2072857/11547080/2bc570f4-9906-11e5-8897-3b3bff173276.png)

Author: Yin Huai <yhuai@databricks.com>

Closes #10111 from yhuai/SPARK-12109.
2015-12-03 11:21:24 +08:00
Josh Rosen ae40253373 [SPARK-12082][FLAKY-TEST] Increase timeouts in NettyBlockTransferSecuritySuite
We should try increasing a timeout in NettyBlockTransferSecuritySuite in order to reduce that suite's flakiness in Jenkins.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #10113 from JoshRosen/SPARK-12082.
2015-12-03 11:12:02 +08:00
Xiangrui Meng 9bb695b7a8 [SPARK-12000] do not specify arg types when referencing a method in ScalaDoc
This fixes SPARK-12000, verified locally with JDK 7. It seems that `scaladoc` tries to match method names and gets confused by annotations.

cc: JoshRosen jkbradley

Author: Xiangrui Meng <meng@databricks.com>

Closes #10114 from mengxr/SPARK-12000.2.
2015-12-02 17:19:31 -08:00
Yadong Qi d0d7ec5330 [SPARK-12093][SQL] Fix the error of comment in DDLParser
Author: Yadong Qi <qiyadong2010@gmail.com>

Closes #10096 from watermen/patch-1.
2015-12-03 08:48:49 +08:00
Yu ISHIKAWA de07d06abe [SPARK-10266][DOCUMENTATION, ML] Fixed @Since annotation for ml.tuning
cc mengxr noel-smith

I worked on this issue based on https://github.com/apache/spark/pull/8729.
ehsanmok thank you for your contribution!

Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
Author: Ehsan M.Kermani <ehsanmo1367@gmail.com>

Closes #9338 from yu-iskw/JIRA-10266.
2015-12-02 14:15:54 -08:00
Josh Rosen 452690ba1c [SPARK-12001] Allow partially-stopped StreamingContext to be completely stopped
If `StreamingContext.stop()` is interrupted midway through the call, the context will be marked as stopped but certain state will have not been cleaned up. Because `state = STOPPED` will be set, subsequent `stop()` calls will be unable to finish stopping the context, preventing any new StreamingContexts from being created.

This patch addresses this issue by only marking the context as `STOPPED` once the `stop()` has successfully completed which allows `stop()` to be called a second time in order to finish stopping the context in case the original `stop()` call was interrupted.
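
The shape of the fix, in sketch form (names hypothetical):

```scala
// STOPPED is recorded only after shutdown work succeeds, so an interrupted
// stop() can be retried to completion instead of wedging the context.
def stop(): Unit = synchronized {
  if (state != STOPPED) {
    stopResourcesAndListeners() // may throw or be interrupted; state unchanged
    state = STOPPED             // reached only when cleanup fully completed
  }
}
```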

I discovered this issue by examining logs from a failed Jenkins run in which this race condition occurred in `FailureSuite`, leaking an unstoppable context and causing all subsequent tests to fail.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #9982 from JoshRosen/SPARK-12001.
2015-12-02 13:44:01 -08:00