SPARK-3359 [CORE] [DOCS] sbt/sbt unidoc doesn't work with Java 8

These are more `javadoc` 8-related changes I spotted while investigating. They should be helpful in any event, but this does not come close to resolving SPARK-3359, which may never be feasible while using `unidoc` with `javadoc` 8.
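
For context, a hedged illustration (not taken from the patch itself) of the kind of scaladoc that trips javadoc 8's doclint once unidoc/genjavadoc turns it into Javadoc, and the corresponding fix pattern used throughout this commit; `Foo#bar` is a made-up link target:

    // Before: doclint treats bare '<' / '>' as malformed HTML, and a '[[Foo.bar]]' link
    // may not survive conversion into a valid Javadoc {@link}.
    /** Maps via rdd.map(x => (x, 1L)); result will be <= the input size. See [[Foo.bar]]. */

    // After: angle brackets escaped as HTML entities, member links written with '#'.
    /** Maps via rdd.map(x =&gt; (x, 1L)); result will be &lt;= the input size. See [[Foo#bar]]. */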

Author: Sean Owen <sowen@cloudera.com>

Closes #4193 from srowen/SPARK-3359 and squashes the following commits:

5b33f66 [Sean Owen] Additional scaladoc fixes for javadoc 8; still not going to be javadoc 8 compatible
Authored by Sean Owen on 2015-01-31 10:40:42 -08:00; committed by Xiangrui Meng
commit c84d5a10e8 (parent ef8974b1b7)
7 changed files with 20 additions and 20 deletions

@@ -604,8 +604,8 @@ abstract class RDD[T: ClassTag](
    * print line function (like out.println()) as the 2nd parameter.
    * An example of pipe the RDD data of groupBy() in a streaming way,
    * instead of constructing a huge String to concat all the elements:
-   * def printRDDElement(record:(String, Seq[String]), f:String=>Unit) =
-   * for (e <- record._2){f(e)}
+   * def printRDDElement(record:(String, Seq[String]), f:String=&gt;Unit) =
+   * for (e &lt;- record._2){f(e)}
    * @param separateWorkingDir Use separate working directories for each task.
    * @return the result RDD
    */
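
As a usage note, a hedged sketch of the streaming pipe pattern that the scaladoc above describes; `sc` and the data are illustrative, and the grouped values are an `Iterable` as returned by `groupByKey` in this Spark version:

    // Pipe each grouped record's elements one per line into an external command,
    // without first building one huge concatenated String.
    val grouped = sc.parallelize(Seq(("a", "1"), ("a", "2"), ("b", "3"))).groupByKey()

    def printRDDElement(record: (String, Iterable[String]), f: String => Unit): Unit =
      for (e <- record._2) f(e)

    val piped = grouped.pipe(
      command = Seq("wc", "-l"),
      printRDDElement = printRDDElement)
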
@@ -841,7 +841,7 @@ abstract class RDD[T: ClassTag](
    * Return an RDD with the elements from `this` that are not in `other`.
    *
    * Uses `this` partitioner/partition size, because even if `other` is huge, the resulting
-   * RDD will be <= us.
+   * RDD will be &lt;= us.
    */
   def subtract(other: RDD[T]): RDD[T] =
     subtract(other, partitioner.getOrElse(new HashPartitioner(partitions.size)))
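
A hedged usage sketch of the semantics described above, with `sc` assumed to be an existing SparkContext:

    // `subtract` keeps `this`'s partitioner/partition count, since the result can be no
    // larger than `this` regardless of how big `other` is.
    val a = sc.parallelize(1 to 10)
    val b = sc.parallelize(5 to 15)
    val onlyInA = a.subtract(b)   // 1, 2, 3, 4
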
@@ -1027,7 +1027,7 @@ abstract class RDD[T: ClassTag](
    *
    * Note that this method should only be used if the resulting map is expected to be small, as
    * the whole thing is loaded into the driver's memory.
-   * To handle very large results, consider using rdd.map(x => (x, 1L)).reduceByKey(_ + _), which
+   * To handle very large results, consider using rdd.map(x =&gt; (x, 1L)).reduceByKey(_ + _), which
    * returns an RDD[T, Long] instead of a map.
    */
   def countByValue()(implicit ord: Ordering[T] = null): Map[T, Long] = {
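
To make the scaladoc's suggestion concrete, a hedged sketch of both approaches, again assuming an existing SparkContext `sc`:

    val words = sc.parallelize(Seq("a", "b", "a", "c", "a"))

    // Small result set: collecting a Map on the driver is fine.
    val counts = words.countByValue()                        // a -> 3, b -> 1, c -> 1

    // Very large result set: keep the counts distributed instead.
    val distributedCounts = words.map(x => (x, 1L)).reduceByKey(_ + _)
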
@@ -1065,7 +1065,7 @@ abstract class RDD[T: ClassTag](
    * Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available
    * <a href="http://dx.doi.org/10.1145/2452376.2452456">here</a>.
    *
-   * The relative accuracy is approximately `1.054 / sqrt(2^p)`. Setting a nonzero `sp > p`
+   * The relative accuracy is approximately `1.054 / sqrt(2^p)`. Setting a nonzero `sp &gt; p`
    * would trigger sparse representation of registers, which may reduce the memory consumption
    * and increase accuracy when the cardinality is small.
    *
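
As a worked example of the accuracy formula above (a sketch; `sc` is assumed, and the two-argument `countApproxDistinct(p, sp)` overload is the one documented here):

    // Relative accuracy ~= 1.054 / sqrt(2^p): p = 4 gives ~0.26, p = 12 gives ~0.016.
    def relativeAccuracy(p: Int): Double = 1.054 / math.sqrt(math.pow(2, p))

    // sp = 0 skips the sparse representation; a nonzero sp > p enables it.
    val approx = sc.parallelize(1 to 100000).countApproxDistinct(12, 0)
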
@@ -1383,7 +1383,7 @@ abstract class RDD[T: ClassTag](
   /**
    * Private API for changing an RDD's ClassTag.
-   * Used for internal Java <-> Scala API compatibility.
+   * Used for internal Java-Scala API compatibility.
    */
   private[spark] def retag(cls: Class[T]): RDD[T] = {
     val classTag: ClassTag[T] = ClassTag.apply(cls)
@@ -1392,7 +1392,7 @@ abstract class RDD[T: ClassTag](
   /**
    * Private API for changing an RDD's ClassTag.
-   * Used for internal Java <-> Scala API compatibility.
+   * Used for internal Java-Scala API compatibility.
    */
   private[spark] def retag(implicit classTag: ClassTag[T]): RDD[T] = {
     this.mapPartitions(identity, preservesPartitioning = true)(classTag)

@@ -55,7 +55,7 @@ abstract class Graph[VD: ClassTag, ED: ClassTag] protected () extends Serializable
    * @return an RDD containing the edges in this graph
    *
    * @see [[Edge]] for the edge type.
-   * @see [[triplets]] to get an RDD which contains all the edges
+   * @see [[Graph#triplets]] to get an RDD which contains all the edges
    *      along with their vertex data.
    *
    */
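
A hedged sketch contrasting `edges` with `triplets`, per the @see note above; it assumes an existing `graph: Graph[String, Int]`:

    // `edges` gives only the endpoint IDs and the edge attribute...
    val edgeOnly = graph.edges.map(e => (e.srcId, e.dstId, e.attr))

    // ...while `triplets` also carries the source and destination vertex data.
    val withVertexData = graph.triplets.map(t => (t.srcAttr, t.attr, t.dstAttr))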

@@ -58,11 +58,11 @@ abstract class PipelineStage extends Serializable with Logging {
 /**
  * :: AlphaComponent ::
  * A simple pipeline, which acts as an estimator. A Pipeline consists of a sequence of stages, each
- * of which is either an [[Estimator]] or a [[Transformer]]. When [[Pipeline.fit]] is called, the
- * stages are executed in order. If a stage is an [[Estimator]], its [[Estimator.fit]] method will
+ * of which is either an [[Estimator]] or a [[Transformer]]. When [[Pipeline#fit]] is called, the
+ * stages are executed in order. If a stage is an [[Estimator]], its [[Estimator#fit]] method will
  * be called on the input dataset to fit a model. Then the model, which is a transformer, will be
  * used to transform the dataset as the input to the next stage. If a stage is a [[Transformer]],
- * its [[Transformer.transform]] method will be called to produce the dataset for the next stage.
+ * its [[Transformer#transform]] method will be called to produce the dataset for the next stage.
  * The fitted model from a [[Pipeline]] is an [[PipelineModel]], which consists of fitted models and
  * transformers, corresponding to the pipeline stages. If there are no stages, the pipeline acts as
  * an identity transformer.
@@ -77,9 +77,9 @@ class Pipeline extends Estimator[PipelineModel] {
   /**
    * Fits the pipeline to the input dataset with additional parameters. If a stage is an
-   * [[Estimator]], its [[Estimator.fit]] method will be called on the input dataset to fit a model.
+   * [[Estimator]], its [[Estimator#fit]] method will be called on the input dataset to fit a model.
    * Then the model, which is a transformer, will be used to transform the dataset as the input to
-   * the next stage. If a stage is a [[Transformer]], its [[Transformer.transform]] method will be
+   * the next stage. If a stage is a [[Transformer]], its [[Transformer#transform]] method will be
    * called to produce the dataset for the next stage. The fitted model from a [[Pipeline]] is an
    * [[PipelineModel]], which consists of fitted models and transformers, corresponding to the
    * pipeline stages. If there are no stages, the output model acts as an identity transformer.
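
To illustrate the stage-by-stage behaviour these comments describe, a hedged sketch along the lines of the spark.ml examples of this era; the `training` dataset and the "text"/"label" column names are assumptions:

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

    // Two Transformers followed by an Estimator.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10)

    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))

    // Transformer stages are applied via transform, the Estimator stage via fit, in order;
    // the result is a PipelineModel holding the fitted stages.
    val model = pipeline.fit(training)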

@@ -151,10 +151,10 @@ class RowMatrix(
    * storing the right singular vectors, is computed via matrix multiplication as
    * U = A * (V * S^-1^), if requested by user. The actual method to use is determined
    * automatically based on the cost:
-   *  - If n is small (n &lt; 100) or k is large compared with n (k > n / 2), we compute the Gramian
-   *    matrix first and then compute its top eigenvalues and eigenvectors locally on the driver.
-   *    This requires a single pass with O(n^2^) storage on each executor and on the driver, and
-   *    O(n^2^ k) time on the driver.
+   *  - If n is small (n &lt; 100) or k is large compared with n (k &gt; n / 2), we compute
+   *    the Gramian matrix first and then compute its top eigenvalues and eigenvectors locally
+   *    on the driver. This requires a single pass with O(n^2^) storage on each executor and
+   *    on the driver, and O(n^2^ k) time on the driver.
    *  - Otherwise, we compute (A' * A) * v in a distributive way and send it to ARPACK's DSAUPD to
    *    compute (A' * A)'s top eigenvalues and eigenvectors on the driver node. This requires O(k)
    *    passes, O(n) storage on each executor, and O(n k) storage on the driver.
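
A hedged usage sketch of `computeSVD`, whose strategy selection the comment above describes; it assumes an existing `rows: RDD[Vector]`:

    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val mat = new RowMatrix(rows)

    // Request the top 5 singular values/vectors; the Gramian vs. ARPACK path is chosen
    // automatically based on n and k as described above.
    val svd = mat.computeSVD(5, computeU = true)
    val (u, s, v) = (svd.U, svd.s, svd.V)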

@@ -183,7 +183,7 @@ private[tree] object DecisionTreeMetadata extends Logging {
   }
   /**
-   * Version of [[buildMetadata()]] for DecisionTree.
+   * Version of [[DecisionTreeMetadata#buildMetadata]] for DecisionTree.
    */
   def buildMetadata(
       input: RDD[LabeledPoint],

@@ -45,7 +45,7 @@ trait Loss extends Serializable {
    *          purposes.
    * @param model Model of the weak learner.
    * @param data Training dataset: RDD of [[org.apache.spark.mllib.regression.LabeledPoint]].
-   * @return
+   * @return Measure of model error on data
    */
   def computeError(model: TreeEnsembleModel, data: RDD[LabeledPoint]): Double
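
For illustration, a hedged sketch of calling a concrete `Loss` implementation; `SquaredError` is an assumption about which implementations exist, and `model`/`data` are assumed to be an existing `TreeEnsembleModel` and `RDD[LabeledPoint]`:

    import org.apache.spark.mllib.tree.loss.SquaredError

    // Measure of model error on the given data, as documented above.
    val error: Double = SquaredError.computeError(model, data)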

@@ -62,7 +62,7 @@ object LinearDataGenerator {
    * @param nPoints Number of points in sample.
    * @param seed Random seed
    * @param eps Epsilon scaling factor.
-   * @return
+   * @return Seq of input.
    */
   def generateLinearInput(
       intercept: Double,
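
A hedged sketch of calling `generateLinearInput` with the parameters documented above; the `weights` parameter name is an assumption, the others appear in the signature and @param tags shown here:

    import org.apache.spark.mllib.util.LinearDataGenerator

    // y = intercept + weights . x + noise scaled by eps, for nPoints deterministic samples.
    val points = LinearDataGenerator.generateLinearInput(
      intercept = 1.0,
      weights = Array(0.5, -0.3),
      nPoints = 100,
      seed = 42,
      eps = 0.1)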