[MINOR] Issue: Change "slice" vs "partition" in exception messages (and code?)

## What changes were proposed in this pull request?

I came across the term "slice" while running some Spark Scala code, and a search indicated that "slices" and "partitions" refer to the same thing; see:

- [This issue](https://issues.apache.org/jira/browse/SPARK-1701)
- [This pull request](https://github.com/apache/spark/pull/2305)
- [This StackOverflow answer](http://stackoverflow.com/questions/23436640/what-is-the-difference-between-an-rdd-partition-and-a-slice) and [this one](http://stackoverflow.com/questions/24269495/what-are-the-differences-between-slices-and-partitions-of-rdds)

This pull request therefore fixes the occurrences of "slice" I came across. [It would appear](https://github.com/apache/spark/search?utf8=%E2%9C%93&q=slice&type=) there are still many other references to "slice"/"slices", so I raised this pull request to address the issue (apologies if this is the wrong place; I'm not too familiar with raising Apache issues). A minimal REPL-style sketch of the equivalence follows.
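To make the equivalence concrete, here is a hedged REPL-style sketch (assuming an existing `SparkContext` named `sc`, e.g. from `spark-shell`) showing that the `numSlices` argument to `parallelize` is exactly the resulting partition count:

```scala
// Sketch, assuming an existing SparkContext `sc` (e.g. from spark-shell).
val rdd = sc.parallelize(1 to 100, numSlices = 4)
println(rdd.getNumPartitions)  // prints 4: "slices" are just partitions
```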

## How was this patch tested?

(Not tested locally; these are only minor changes to an exception message and usage strings.)


Author: asmith26 <asmith26@users.noreply.github.com>

Closes #17565 from asmith26/master.
Authored by asmith26 on 2017-04-09 07:47:23 +01:00; committed by Sean Owen.
Commit 34fc48fb59 (parent e1afc4dcca); 7 changed files with 7 additions and 7 deletions.

```diff
@@ -116,7 +116,7 @@ private object ParallelCollectionRDD {
    */
   def slice[T: ClassTag](seq: Seq[T], numSlices: Int): Seq[Seq[T]] = {
     if (numSlices < 1) {
-      throw new IllegalArgumentException("Positive number of slices required")
+      throw new IllegalArgumentException("Positive number of partitions required")
     }
     // Sequences need to be sliced at the same set of index positions for operations
     // like RDD.zip() to behave as expected
```
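For illustration, a sketch of how this code path is hit (assuming a live `SparkContext` named `sc`; note the check fires lazily, when the RDD's partitions are first computed):

```scala
// Sketch, assuming an existing SparkContext `sc`:
// requesting zero slices/partitions trips the check above when a job runs.
try {
  sc.parallelize(1 to 10, numSlices = 0).count()
} catch {
  case e: IllegalArgumentException =>
    println(e.getMessage)  // "Positive number of partitions required"
}
```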

```diff
@@ -26,7 +26,7 @@ import java.util.List;
 
 /**
  * Computes an approximation to pi
- * Usage: JavaSparkPi [slices]
+ * Usage: JavaSparkPi [partitions]
  */
 public final class JavaSparkPi {
```
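The pi example itself is tiny; roughly, it is a Monte Carlo estimate where the partition count controls the parallelism. A Scala sketch of the same idea (assuming a `SparkContext` named `sc`; the constants are arbitrary):

```scala
// Monte Carlo estimate of pi (sketch; `sc` and the constants are assumptions).
val partitions = 2
val n = 100000 * partitions
val inside = sc.parallelize(1 to n, partitions).filter { _ =>
  val x = math.random * 2 - 1
  val y = math.random * 2 - 1
  x * x + y * y <= 1  // point falls inside the unit circle
}.count()
println(s"Pi is roughly ${4.0 * inside / n}")
```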

```diff
@@ -32,7 +32,7 @@ import org.apache.spark.sql.SparkSession;
 
 /**
  * Transitive closure on a graph, implemented in Java.
- * Usage: JavaTC [slices]
+ * Usage: JavaTC [partitions]
  */
 public final class JavaTC {
```
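For context, the transitive-closure algorithm this example implements amounts to repeatedly joining the edge set with itself until it stops growing. A minimal Scala sketch (assuming a `SparkContext` named `sc`; the edge list is a hypothetical placeholder):

```scala
// Transitive closure by repeated self-join (sketch; `sc` and edges are assumed).
var tc = sc.parallelize(Seq((1, 2), (2, 3), (3, 4))).cache()
val edges = tc.map { case (from, to) => (to, from) }  // key edges by destination
var oldCount = 0L
var nextCount = tc.count()
while (nextCount != oldCount) {
  oldCount = nextCount
  // Joining a path (x, y) with an edge (w, x) yields the new path (w, y).
  tc = tc.union(tc.join(edges).map { case (_, (y, w)) => (w, y) }).distinct().cache()
  nextCount = tc.count()
}
println(s"TC has $nextCount edges")
```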

```diff
@@ -21,7 +21,7 @@ package org.apache.spark.examples
 
 import org.apache.spark.sql.SparkSession
 
 /**
- * Usage: BroadcastTest [slices] [numElem] [blockSize]
+ * Usage: BroadcastTest [partitions] [numElem] [blockSize]
  */
 object BroadcastTest {
   def main(args: Array[String]) {
```
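What this example exercises, roughly, is shipping one read-only array to every executor once, rather than serializing it with every task. A hedged Scala sketch (assuming a `SparkContext` named `sc`; the sizes are arbitrary):

```scala
// Broadcast a read-only array once and reference it from tasks
// (sketch; `sc` and the sizes are assumptions).
val partitions = 2
val data = (0 until 1000000).toArray
val broadcastData = sc.broadcast(data)
val lengths = sc.parallelize(1 to 10, partitions)
  .map(_ => broadcastData.value.length)  // tasks read the shared executor-local copy
  .collect()
println(lengths.mkString(","))
```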

```diff
@@ -23,7 +23,7 @@ import org.apache.spark.sql.SparkSession
 
 /**
- * Usage: MultiBroadcastTest [slices] [numElem]
+ * Usage: MultiBroadcastTest [partitions] [numElem]
  */
 object MultiBroadcastTest {
   def main(args: Array[String]) {
```

```diff
@@ -100,7 +100,7 @@ object SparkALS {
         ITERATIONS = iters.getOrElse("5").toInt
         slices = slices_.getOrElse("2").toInt
       case _ =>
-        System.err.println("Usage: SparkALS [M] [U] [F] [iters] [slices]")
+        System.err.println("Usage: SparkALS [M] [U] [F] [iters] [partitions]")
         System.exit(1)
     }
```

```diff
@@ -28,7 +28,7 @@ import org.apache.spark.sql.SparkSession
 
 /**
  * Logistic regression based classification.
- * Usage: SparkLR [slices]
+ * Usage: SparkLR [partitions]
  *
  * This is an example implementation for learning how to use Spark. For more conventional use,
  * please refer to org.apache.spark.ml.classification.LogisticRegression.
```