SparkSubmitDriverBootstrapper.scala now returns the exit code of the driver process, instead of always returning 0.
Author: Eric Eijkelenboom <ee@userreport.com>
Closes#2628 from ericeijkelenboom/master and squashes the following commits:
cc4a571 [Eric Eijkelenboom] Return the exit code of the driver process
pwendell, ```tryPort``` is not compatible with the old code in the last PR; this is to fix it.
After discussing with srowen, I renamed the title to "avoid trying privileged port when request a non-privileged port". Please refer to the discussion for details.
Author: scwf <wangfei1@huawei.com>
Closes#2623 from scwf/1-1024 and squashes the following commits:
10a4437 [scwf] add comment
de3fd17 [scwf] do not try privileged port when request a non-privileged port
42cb0fa [scwf] make tryPort compatible with old code
cb8cc76 [scwf] do not use port 1 - 1024
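The port-selection rule described above can be sketched as follows (a minimal Python sketch; `try_ports` is a hypothetical helper name, the real logic lives in Scala's `Utils.startServiceOnPort`): port 0 still means "let the OS pick an ephemeral port", while retries from any other start port wrap around within the non-privileged range instead of drifting into 1-1024.

```python
import socket

def try_ports(start_port, max_retries=16):
    """Hypothetical sketch: bind successive ports starting at start_port.

    Port 0 asks the OS for an ephemeral port; for any other start port,
    retries wrap around within the non-privileged range (1024, 65536) so a
    non-root caller never attempts ports 1-1024.
    """
    for attempt in range(max_retries + 1):
        if start_port == 0:
            port = 0  # let the OS pick an ephemeral port
        else:
            # wrap into [1024, 65536) instead of drifting below 1024
            port = 1024 + (start_port + attempt - 1024) % (65536 - 1024)
        try:
            s = socket.socket()
            s.bind(("127.0.0.1", port))
            bound = s.getsockname()[1]
            s.close()
            return bound
        except OSError:
            continue
    raise OSError("could not bind a port after %d retries" % max_retries)
```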
If you turn authentication on and are using a lot of executors, there is a chance that all of the threads in the handleMessageExecutor could be waiting to send a message because they are blocked waiting on authentication to happen. This can cause a temporary deadlock until the connection times out.
To fix it, I got rid of the wait/notify and use a single outbox, but only send security messages from it until authentication has completed.
Author: Thomas Graves <tgraves@apache.org>
Closes#2484 from tgravescs/cm_threads_auth and squashes the following commits:
a0a961d [Thomas Graves] give it a type
b6bc80b [Thomas Graves] Rework comments
d6d4175 [Thomas Graves] update from comments
081b765 [Thomas Graves] cleanup
4d7f8f5 [Thomas Graves] Change to not use wait/notify while waiting for authentication
If a block manager (say, A) wants to replicate a block and the node chosen for replication (say, B) is dead, then the attempt to send the block to B fails. However, this continues to fail indefinitely. Even if the driver learns about the demise of B, A continues to try replicating to B and failing miserably.
The reason behind this bug is that A initially fetches a list of peers from the driver (when B was active), but never updates it after B is dead. This affects Spark Streaming as its receiver uses block replication.
The solution in this patch adds the following.
- Changed BlockManagerMaster to return all the peers of a block manager, rather than the requested number. It also filters out the driver BlockManager.
- Refactored BlockManager's replication code to handle peer caching correctly.
  + The peer for replication is randomly selected. This is different from past behavior, where for a node A, a node B was deterministically chosen for the lifetime of the application.
  + If replication to one node fails, the peers are refetched.
  + The peer cache has a TTL of 1 second, to enable discovery of new peers and using them for replication.
- Refactored the use of `<driver>` in BlockManager into a new method `BlockManagerId.isDriver`
- Added replication unit tests (replication was not tested till now, duh!)
This should not make a difference in performance of Spark workloads where replication is not used.
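The caching scheme above can be sketched like this (a hedged Python sketch; `PeerCache` and its method names are hypothetical illustrations, not Spark's actual API):

```python
import random
import time

class PeerCache:
    """Sketch of the scheme described above: peers are fetched via a
    callback (e.g. asking the driver), cached with a short TTL, chosen at
    random, and explicitly refreshed when a replication attempt fails."""

    def __init__(self, fetch_peers, ttl_seconds=1.0, clock=time.monotonic):
        self._fetch = fetch_peers
        self._ttl = ttl_seconds
        self._clock = clock
        self._peers = []
        self._fetched_at = float("-inf")

    def get_peers(self, force_fetch=False):
        # Refetch when forced or when the cached list is older than the TTL.
        if force_fetch or self._clock() - self._fetched_at > self._ttl:
            self._peers = list(self._fetch())
            self._fetched_at = self._clock()
        return self._peers

    def choose_peer(self, exclude=(), force_fetch=False):
        # Random selection, rather than a fixed peer per node.
        candidates = [p for p in self.get_peers(force_fetch) if p not in exclude]
        return random.choice(candidates) if candidates else None
```

On a failed replication attempt, the caller would invoke `choose_peer(exclude=[failed_peer], force_fetch=True)` to drop the stale peer list immediately rather than waiting out the TTL.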
@andrewor14 @JoshRosen
Author: Tathagata Das <tathagata.das1565@gmail.com>
Closes#2366 from tdas/replication-fix and squashes the following commits:
9690f57 [Tathagata Das] Moved replication tests to a new BlockManagerReplicationSuite.
0661773 [Tathagata Das] Minor changes based on PR comments.
a55a65c [Tathagata Das] Added a unit test to test replication behavior.
012afa3 [Tathagata Das] Bug fix
89f91a0 [Tathagata Das] Minor change.
68e2c72 [Tathagata Das] Made replication peer selection logic more efficient.
08afaa9 [Tathagata Das] Made peer selection for replication deterministic to block id
3821ab9 [Tathagata Das] Fixes based on PR comments.
08e5646 [Tathagata Das] More minor changes.
d402506 [Tathagata Das] Fixed imports.
4a20531 [Tathagata Das] Filtered driver block manager from peer list, and also consolidated the use of <driver> in BlockManager.
7598f91 [Tathagata Das] Minor changes.
03de02d [Tathagata Das] Change replication logic to correctly refetch peers from master on failure and on new worker addition.
d081bf6 [Tathagata Das] Fixed bug in get peers and unit tests to test get-peers and replication under executor churn.
9f0ac9f [Tathagata Das] Modified replication tests to fail on replication bug.
af0c1da [Tathagata Das] Added replication unit tests to BlockManagerSuite
Author: scwf <wangfei1@huawei.com>
Closes#2632 from scwf/compress-doc and squashes the following commits:
7983a1a [scwf] snappy is the default compression codec for broadcast
Redone against the recent master branch (https://github.com/apache/spark/pull/1391)
Author: Nishkam Ravi <nravi@cloudera.com>
Author: nravi <nravi@c1704.halxg.cloudera.com>
Author: nishkamravi2 <nishkamravi@gmail.com>
Closes#2485 from nishkamravi2/master_nravi and squashes the following commits:
636a9ff [nishkamravi2] Update YarnAllocator.scala
8f76c8b [Nishkam Ravi] Doc change for yarn memory overhead
35daa64 [Nishkam Ravi] Slight change in the doc for yarn memory overhead
5ac2ec1 [Nishkam Ravi] Remove out
dac1047 [Nishkam Ravi] Additional documentation for yarn memory overhead issue
42c2c3d [Nishkam Ravi] Additional changes for yarn memory overhead issue
362da5e [Nishkam Ravi] Additional changes for yarn memory overhead
c726bd9 [Nishkam Ravi] Merge branch 'master' of https://github.com/apache/spark into master_nravi
f00fa31 [Nishkam Ravi] Improving logging for AM memoryOverhead
1cf2d1e [nishkamravi2] Update YarnAllocator.scala
ebcde10 [Nishkam Ravi] Modify default YARN memory_overhead-- from an additive constant to a multiplier (redone to resolve merge conflicts)
2e69f11 [Nishkam Ravi] Merge branch 'master' of https://github.com/apache/spark into master_nravi
efd688a [Nishkam Ravi] Merge branch 'master' of https://github.com/apache/spark
2b630f9 [nravi] Accept memory input as "30g", "512M" instead of an int value, to be consistent with rest of Spark
3bf8fad [nravi] Merge branch 'master' of https://github.com/apache/spark
5423a03 [nravi] Merge branch 'master' of https://github.com/apache/spark
eb663ca [nravi] Merge branch 'master' of https://github.com/apache/spark
df2aeb1 [nravi] Improved fix for ConcurrentModificationIssue (Spark-1097, Hadoop-10456)
6b840f0 [nravi] Undo the fix for SPARK-1758 (the problem is fixed)
5108700 [nravi] Fix in Spark for the Concurrent thread modification issue (SPARK-1097, HADOOP-10456)
681b36f [nravi] Fix for SPARK-1758: failing test org.apache.spark.JavaAPISuite.wholeTextFiles
We have changed the output format of `printSchema`. This PR will update our SQL programming guide to show the updated format. Also, it fixes a typo (the value type of `StructType` in Java API).
Author: Yin Huai <huai@cse.ohio-state.edu>
Closes#2630 from yhuai/sqlDoc and squashes the following commits:
267d63e [Yin Huai] Update the output of printSchema and fix a typo.
### Problem
The section "Using the shell" in Spark Programming Guide (https://spark.apache.org/docs/latest/programming-guide.html#using-the-shell) says that we can run pyspark REPL through IPython.
But the following command does not run IPython but a default Python executable.
```
$ IPYTHON=1 ./bin/pyspark
Python 2.7.8 (default, Jul 2 2014, 10:14:46)
...
```
The spark/bin/pyspark script at commit b235e01363 decides which executable and options to use in the following way.
1. if PYSPARK_PYTHON unset
* → defaulting to "python"
2. if IPYTHON_OPTS set
* → set IPYTHON "1"
3. if some Python script is passed to ./bin/pyspark → run it with ./bin/spark-submit
* out of this issue's scope
4. if IPYTHON set as "1"
* → execute $PYSPARK_PYTHON (default: ipython) with arguments $IPYTHON_OPTS
* otherwise execute $PYSPARK_PYTHON
Therefore, when PYSPARK_PYTHON is unset, python is executed even though IPYTHON is "1".
In other words, when PYSPARK_PYTHON is unset, IPYTHON_OPTS and IPYTHON have no effect on deciding which command to use.
PYSPARK_PYTHON | IPYTHON_OPTS | IPYTHON | resulting command | expected command
---- | ---- | ----- | ----- | -----
(unset → defaults to python) | (unset) | (unset) | python | (same)
(unset → defaults to python) | (unset) | 1 | python | ipython
(unset → defaults to python) | an_option | (unset → set to 1) | python an_option | ipython an_option
(unset → defaults to python) | an_option | 1 | python an_option | ipython an_option
ipython | (unset) | (unset) | ipython | (same)
ipython | (unset) | 1 | ipython | (same)
ipython | an_option | (unset → set to 1) | ipython an_option | (same)
ipython | an_option | 1 | ipython an_option | (same)
### Suggestion
The pyspark script should first determine whether the user wants to run IPython or another executable.
1. if IPYTHON_OPTS set
* set IPYTHON "1"
2. if IPYTHON has a value "1"
* PYSPARK_PYTHON defaults to "ipython" if not set
3. PYSPARK_PYTHON defaults to "python" if not set
See the pull request for more detailed modification.
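The suggested resolution order can be sketched as a small function (a hypothetical Python helper for illustration only; the real logic lives in the bin/pyspark shell script):

```python
def resolve_pyspark_command(env):
    """Sketch of the suggested order: IPYTHON_OPTS implies IPYTHON=1, and
    IPYTHON=1 makes PYSPARK_PYTHON default to ipython instead of python."""
    ipython = env.get("IPYTHON", "")
    if env.get("IPYTHON_OPTS"):          # step 1: options imply IPython
        ipython = "1"
    if ipython == "1":                   # step 2: default executable is ipython
        python = env.get("PYSPARK_PYTHON", "ipython")
        return [python] + env.get("IPYTHON_OPTS", "").split()
    # step 3: otherwise PYSPARK_PYTHON defaults to plain python
    return [env.get("PYSPARK_PYTHON", "python")]
```

With this order, `IPYTHON=1` with PYSPARK_PYTHON unset yields `ipython`, matching the "expected command" column in the table above, while an explicitly set PYSPARK_PYTHON is always honored.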
Author: cocoatomo <cocoatomo77@gmail.com>
Closes#2554 from cocoatomo/issues/cannot-run-ipython-without-options and squashes the following commits:
d2a9b06 [cocoatomo] [SPARK-3706][PySpark] Use PYTHONUNBUFFERED environment variable instead of -u option
264114c [cocoatomo] [SPARK-3706][PySpark] Remove the sentence about deprecated environment variables
42e02d5 [cocoatomo] [SPARK-3706][PySpark] Replace environment variables used to customize execution of PySpark REPL
10d56fb [cocoatomo] [SPARK-3706][PySpark] Cannot run IPython REPL with IPYTHON set to "1" and PYSPARK_PYTHON unset
This change reorders the replicas returned by
HadoopRDD#getPreferredLocations so that replicas cached by HDFS are at
the start of the list. This requires Hadoop 2.5 or higher; previous
versions of Hadoop do not expose the information needed to determine
whether a replica is cached.
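The reordering amounts to a stable sort keyed on cache residency (a minimal sketch with hypothetical names; the real change queries Hadoop 2.5's BlockLocation for the hosts caching each replica):

```python
def prefer_cached_replicas(locations, cached_hosts):
    """Stable-sort block locations so hosts holding HDFS-cached replicas
    come first; the relative order within each group is preserved."""
    cached = set(cached_hosts)
    # False (0) sorts before True (1), so cached hosts move to the front.
    return sorted(locations, key=lambda host: host not in cached)
```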
Author: Colin Patrick Mccabe <cmccabe@cloudera.com>
Closes#1486 from cmccabe/SPARK-1767 and squashes the following commits:
338d4f8 [Colin Patrick Mccabe] SPARK-1767: Prefer HDFS-cached replicas when scheduling data-local tasks
The following code gives an error.
```
sqlContext.registerFunction("len", (s: String) => s.length)
sqlContext.sql("select len(foo) as a, count(1) from t1 group by len(foo)").collect()
```
This is because the SQL parser creates aliases for the functions in grouping expressions with generated alias names. So if the user gives alias names to the functions inside the projection, they do not match the generated alias names of the grouping expressions.
These kinds of queries work in Hive.
So the fix is: if the user provides an alias for the function in the projection, don't generate an alias in the grouping expression; use the same alias.
Author: ravipesala <ravindra.pesala@huawei.com>
Closes#2511 from ravipesala/SPARK-3371 and squashes the following commits:
9fb973f [ravipesala] Removed aliases to grouping expressions.
f8ace79 [ravipesala] Fixed the testcase issue
bad2fd0 [ravipesala] SPARK-3371 : Fixed Renaming a function expression with group by gives error
This commit exists to close the following pull requests on Github:
Closes#1375 (close requested by 'pwendell')
Closes#476 (close requested by 'mengxr')
Closes#2502 (close requested by 'pwendell')
Closes#2391 (close requested by 'andrewor14')
FutureAction is the only type exposed through the async APIs, so
for job IDs to be useful they need to be exposed there. The complication
is that some async jobs run more than one job (e.g. takeAsync),
so the exposed ID actually has to be a list of IDs that can
change over time. So the interface doesn't look very nice, but...
The change is small; I just added a basic test to make sure
it works.
Author: Marcelo Vanzin <vanzin@cloudera.com>
Closes#2337 from vanzin/SPARK-3446 and squashes the following commits:
e166a68 [Marcelo Vanzin] Fix comment.
1fed2bc [Marcelo Vanzin] [SPARK-3446] Expose underlying job ids in FutureAction.
This patch forces the use of commons http client 4.2 in the kinesis-asl profile so that the AWS SDK does not run into dependency conflicts.
Author: aniketbhatnagar <aniket.bhatnagar@gmail.com>
Closes#2535 from aniketbhatnagar/Kinesis-HttpClient-Dep-Fix and squashes the following commits:
aa2079f [aniketbhatnagar] Merge branch 'Kinesis-HttpClient-Dep-Fix' of https://github.com/aniketbhatnagar/spark into Kinesis-HttpClient-Dep-Fix
73f55f6 [aniketbhatnagar] SPARK-3638 | Forced a compatible version of http client in kinesis-asl profile
70cc75b [aniketbhatnagar] deleted merge files
725dbc9 [aniketbhatnagar] Merge remote-tracking branch 'origin/Kinesis-HttpClient-Dep-Fix' into Kinesis-HttpClient-Dep-Fix
4ed61d8 [aniketbhatnagar] SPARK-3638 | Forced a compatible version of http client in kinesis-asl profile
9cd6103 [aniketbhatnagar] SPARK-3638 | Forced a compatible version of http client in kinesis-asl profile
In the ```ShortType``` case, we should add a short value to the Hive row; an int value may lead to some problems.
Author: scwf <wangfei1@huawei.com>
Closes#2551 from scwf/fix-addColumnValue and squashes the following commits:
08bcc59 [scwf] ColumnValue.shortValue for short type
This change avoids a NPE during context initialization when settings are present.
Author: Michael Armbrust <michael@databricks.com>
Closes#2583 from marmbrus/configNPE and squashes the following commits:
da2ec57 [Michael Armbrust] Do all hive session state initilialization in lazy val
Considering `Command.executeCollect()` simply delegates to `Command.sideEffectResult`, we no longer need to leave the latter `protected[sql]`.
Author: Cheng Lian <lian.cs.zju@gmail.com>
Closes#2431 from liancheng/narrow-scope and squashes the following commits:
1bfc16a [Cheng Lian] Made Command.sideEffectResult protected
BinaryType is now derived from NativeType, and Ordering support is added.
Author: Venkata Ramana G <ramana.gollamudi@huawei.com>
Author: Venkata Ramana Gollamudi <ramana.gollamudi@huawei.com>
Closes#2617 from gvramana/binarytype_sort and squashes the following commits:
1cf26f3 [Venkata Ramana Gollamudi] Supported Sorting of BinaryType
add case for VoidObjectInspector in ```inspectorToDataType```
Author: scwf <wangfei1@huawei.com>
Closes#2552 from scwf/inspectorToDataType and squashes the following commits:
453d892 [scwf] add case for VoidObjectInspector
The query below gives an error:
sql("SELECT k FROM (SELECT \`key\` AS \`k\` FROM src) a")
It gives an error because the aliases are not cleaned, so they cannot be resolved in further processing.
Author: ravipesala <ravindra.pesala@huawei.com>
Closes#2594 from ravipesala/SPARK-3708 and squashes the following commits:
d55db54 [ravipesala] Fixed SPARK-3708 (Backticks aren't handled correctly is aliases)
https://issues.apache.org/jira/browse/SPARK-3658
And keep the `CLASS_NOT_FOUND_EXIT_STATUS` and exit message in `SparkSubmit.scala`.
Author: WangTaoTheTonic <barneystinson@aliyun.com>
Author: WangTao <barneystinson@aliyun.com>
Closes#2509 from WangTaoTheTonic/thriftserver and squashes the following commits:
5dcaab2 [WangTaoTheTonic] issue about coupling
8ad9f95 [WangTaoTheTonic] generalization
598e21e [WangTao] take thrift server as a daemon
Author: Michael Armbrust <michael@databricks.com>
Closes#2598 from marmbrus/hiveClientLock and squashes the following commits:
ca89fe8 [Michael Armbrust] Lock hive client when creating tables
SQL example code for Python, as shown on [SQL Programming Guide](https://spark.apache.org/docs/1.0.2/sql-programming-guide.html)
Author: jyotiska <jyotiska123@gmail.com>
Closes#2521 from jyotiska/sql_example and squashes the following commits:
1471dcb [jyotiska] added imports for sql
b25e436 [jyotiska] pep 8 compliance
43fd10a [jyotiska] lines broken to maintain 80 char limit
b4fdf4e [jyotiska] removed blank lines
83d5ab7 [jyotiska] added inferschema and applyschema to the demo
306667e [jyotiska] replaced blank line with end line
c90502a [jyotiska] fixed new line
4939a70 [jyotiska] added new line at end for python style
0b46148 [jyotiska] fixed appname for python sql example
8f67b5b [jyotiska] added python sql example
topicpMap to topicMap
Author: Gaspar Munoz <munozs.88@gmail.com>
Closes#2614 from gasparms/patch-1 and squashes the following commits:
00aab2c [Gaspar Munoz] Typo error in KafkaWordCount example
MD5 of query strings in `createQueryTest` calls are used to generate golden files, leaving trailing spaces there can be really dangerous. Got bitten by this while working on #2616: my "smart" IDE automatically removed a trailing space and makes Jenkins fail.
(Really should add "no trailing space" to our coding style guidelines!)
Author: Cheng Lian <lian.cs.zju@gmail.com>
Closes#2619 from liancheng/kill-trailing-space and squashes the following commits:
034f119 [Cheng Lian] Kill dangerous trailing space in query string
A non-root user using ports 1-1024 to start a Jetty server will get the exception "java.net.SocketException: Permission denied", so do not use these ports.
Author: scwf <wangfei1@huawei.com>
Closes#2610 from scwf/1-1024 and squashes the following commits:
cb8cc76 [scwf] do not use port 1 - 1024
Call SparkContext.stop() in all examples (and touch up minor nearby code style issues while at it)
Author: Sean Owen <sowen@cloudera.com>
Closes#2575 from srowen/SPARK-2626 and squashes the following commits:
5b2baae [Sean Owen] Call SparkContext.stop() in all examples (and touch up minor nearby code style issues while at it)
1. broadcast is triggered unexpectedly
2. an fd is leaked in the JVM (also leaked in parallelize())
3. broadcast is not unpersisted in the JVM after the RDD is no longer used
cc JoshRosen, sorry for these stupid bugs.
Author: Davies Liu <davies.liu@gmail.com>
Closes#2603 from davies/fix_broadcast and squashes the following commits:
080a743 [Davies Liu] fix bugs in broadcast large closure of RDD
Added directory to be deleted into maven-clean-plugin in pom.xml.
Author: Masayoshi TSUZUKI <tsudukim@oss.nttdata.co.jp>
Closes#2613 from tsudukim/feature/SPARK-3757 and squashes the following commits:
8804bfc [Masayoshi TSUZUKI] Modified indent.
67c7171 [Masayoshi TSUZUKI] [SPARK-3757] mvn clean doesn't delete some files
Thread names are useful for correlating failures.
Author: Reynold Xin <rxin@apache.org>
Closes#2600 from rxin/log4j and squashes the following commits:
83ffe88 [Reynold Xin] [SPARK-3748] Log thread name in unit test logs
DecisionTreeRunner functionality additions:
* Allow user to pass in a test dataset
* Do not print full model if the model is too large.
As part of this, modify DecisionTreeModel and RandomForestModel to allow printing less info. Proposed updates:
* toString: prints model summary
* toDebugString: prints full model (named after RDD.toDebugString)
Similar update to Python API:
* __repr__() now prints a model summary
* toDebugString() now prints the full model
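The summary/full-dump split can be illustrated with a toy class (hypothetical stand-in for illustration; not the actual MLlib API):

```python
class ToyTreeModel:
    """Toy model illustrating a short __repr__ summary vs. a full
    toDebugString dump, mirroring the split described above."""

    def __init__(self, depth, num_nodes, nodes):
        self.depth = depth
        self.num_nodes = num_nodes
        self.nodes = nodes  # one printable line per tree node

    def __repr__(self):
        # Short summary: safe to print even for very large models.
        return "ToyTreeModel(depth=%d, numNodes=%d)" % (self.depth, self.num_nodes)

    def toDebugString(self):
        # Full model: one line per node, potentially very large output.
        return "\n".join(self.nodes)
```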
CC: mengxr chouqin manishamde codedeft. Small update (whoever can take a look). Thanks!
Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
Closes#2604 from jkbradley/dtrunner-update and squashes the following commits:
b2b3c60 [Joseph K. Bradley] re-added python sql doc test, temporarily removed before
07b1fae [Joseph K. Bradley] repr() now prints a model summary toDebugString() now prints the full model
1d0d93d [Joseph K. Bradley] Updated DT and RF to print less when toString is called. Added toDebugString for verbose printing.
22eac8c [Joseph K. Bradley] Merge remote-tracking branch 'upstream/master' into dtrunner-update
e007a95 [Joseph K. Bradley] Updated DecisionTreeRunner to accept a test dataset.
Author: Reynold Xin <rxin@apache.org>
Closes#2599 from rxin/SPARK-3747 and squashes the following commits:
a74c04d [Reynold Xin] Added a line of comment explaining NonFatal
0e8d44c [Reynold Xin] [SPARK-3747] TaskResultGetter could incorrectly abort a stage if it cannot get result for a specific task
1. doc updates
2. simple checks on vector dimensions
3. use column major for matrices
davies jkbradley
Author: Xiangrui Meng <meng@databricks.com>
Closes#2548 from mengxr/mllib-py-clean and squashes the following commits:
6dce2df [Xiangrui Meng] address comments
116b5db [Xiangrui Meng] use np.dot instead of array.dot
75f2fcc [Xiangrui Meng] fix python style
fefce00 [Xiangrui Meng] better check of vector size with more tests
067ef71 [Xiangrui Meng] majored -> major
ef853f9 [Xiangrui Meng] update python linalg api and small fixes
Author: Reynold Xin <rxin@apache.org>
Closes#2602 from rxin/warning and squashes the following commits:
130186b [Reynold Xin] Remove compiler warning from TaskContext change.
Since it looked quite easy, I took the liberty of making a quick PR that just uses `Utils.startServiceOnPort` to fix this. It works locally for me.
Author: Sean Owen <sowen@cloudera.com>
Closes#2601 from srowen/SPARK-3744 and squashes the following commits:
ddc9319 [Sean Owen] Avoid port contention in tests by retrying several ports for Flume stream
[By request](https://github.com/apache/spark/pull/2588#issuecomment-57266871), and because it also makes sense.
Author: Nicholas Chammas <nicholas.chammas@gmail.com>
Closes#2597 from nchammas/timeout-commit-hash and squashes the following commits:
3d90714 [Nicholas Chammas] Revert "testing: making timeout 1 minute"
2353c95 [Nicholas Chammas] testing: making timeout 1 minute
e3a477e [Nicholas Chammas] post commit hash with timeout
for details, see: https://issues.apache.org/jira/browse/SPARK-3745
Author: shane knapp <incomplete@gmail.com>
Closes#2596 from shaneknapp/SPARK-3745 and squashes the following commits:
c95eea9 [shane knapp] SPARK-3745 - fix check-license to properly download and check jar
As suggested by mateiz , and because it came up on the mailing list again last week, this attempts to document that ordering of elements is not guaranteed across RDD evaluations in groupBy, zip, and partition-wise RDD methods. Suggestions welcome about the wording, or other methods that need a note.
Author: Sean Owen <sowen@cloudera.com>
Closes#2508 from srowen/SPARK-3356 and squashes the following commits:
b7c96fd [Sean Owen] Undo change to programming guide
ad4aeec [Sean Owen] Don't mention ordering in partition-wise methods, reword description of ordering for zip methods per review, and add similar note to programming guide, which mentions groupByKey (but not zip methods)
fce943b [Sean Owen] Note that ordering of elements is not guaranteed across RDD evaluations in groupBy, zip, and partition-wise RDD methods
When using spark-submit in `cluster` mode to submit a job to a Spark Standalone
cluster, if the JAVA_HOME environment variable was set on the submitting
machine then DriverRunner would attempt to use the submitter's JAVA_HOME to
launch the driver process (instead of the worker's JAVA_HOME), causing the
driver to fail unless the submitter and worker had the same Java location.
This commit fixes this by reading JAVA_HOME from sys.env instead of
command.environment.
Author: Josh Rosen <joshrosen@apache.org>
Closes#2586 from JoshRosen/SPARK-3734 and squashes the following commits:
e9513d9 [Josh Rosen] [SPARK-3734] DriverRunner should not read SPARK_HOME from submitter's environment.
The problem was that the 2nd argument in RemoveBroadcast is not tellMaster! It is "removeFromDriver". Basically when removeFromDriver is not true, we don't report broadcast block removal back to the driver, and then other executors mistakenly think that the executor would still have the block, and try to fetch from it.
cc @tdas
Author: Reynold Xin <rxin@apache.org>
Closes#2588 from rxin/debug and squashes the following commits:
6dab2e3 [Reynold Xin] Don't log random messages.
f430686 [Reynold Xin] Always report broadcast removal back to master.
2a13f70 [Reynold Xin] iii
This changes the way we send MapStatus from executors back to driver for large stages (>2000 tasks). For large stages, we no longer send one byte per block. Instead, we just send the average block size.
This makes large jobs (tens of thousands of tasks) much more reliable since the driver no longer sends huge amount of data.
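The idea can be sketched as follows (helper names are hypothetical; Spark's actual implementation is the `MapStatus` hierarchy in Scala):

```python
def compress_map_status(block_sizes, huge_stage_threshold=2000):
    """Below the threshold, keep exact per-block sizes; above it, keep only
    which blocks are empty plus the average size of the non-empty ones."""
    if len(block_sizes) < huge_stage_threshold:
        return ("exact", list(block_sizes))
    non_empty = [s for s in block_sizes if s > 0]
    avg = sum(non_empty) // len(non_empty) if non_empty else 0
    empty = {i for i, s in enumerate(block_sizes) if s == 0}
    return ("compressed", (empty, avg))

def estimated_size(status, block_id):
    """Reducers only need a size estimate: exact for small stages,
    0 for known-empty blocks, the average otherwise."""
    kind, data = status
    if kind == "exact":
        return data[block_id]
    empty, avg = data
    return 0 if block_id in empty else avg
```

The compressed form costs O(1) per non-empty block regardless of stage size, which is why the driver no longer has to ship one byte per block for huge stages.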
Author: Reynold Xin <rxin@apache.org>
Closes#2470 from rxin/mapstatus and squashes the following commits:
822ff54 [Reynold Xin] Code review feedback.
3b86f56 [Reynold Xin] Added MimaExclude.
f89d182 [Reynold Xin] Fixed a bug in MapStatus
6a0401c [Reynold Xin] [SPARK-3613] Record only average block size in MapStatus for large stages.
Author: Reynold Xin <rxin@apache.org>
Closes#2581 from rxin/minor-cleanup and squashes the following commits:
736a91b [Reynold Xin] Minor cleanup of code.
Author: oded <oded@HP-DV6.c4internal.c4-security.com>
Closes#2486 from odedz/master and squashes the following commits:
dd7890a [oded] Fixed the condition in StronglyConnectedComponents Issue: SPARK-3635
When `numVertices > 50`, the probability is set to 0. This would cause an infinite loop.
Author: yingjieMiao <yingjie@42go.com>
Closes#2553 from yingjieMiao/graphx and squashes the following commits:
6adf3c8 [yingjieMiao] [graphX] GraphOps: random pick vertex bug