---
layout: global
title: Machine Learning Library (MLlib)
---
MLlib is Spark's scalable machine learning library consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as underlying optimization primitives, as outlined below:
- Data types
- Basic statistics
  - random data generation
  - stratified sampling
  - summary statistics
  - hypothesis testing
- Classification and regression
- Collaborative filtering
  - alternating least squares (ALS)
- Clustering
  - k-means
- Dimensionality reduction
  - singular value decomposition (SVD)
  - principal component analysis (PCA)
- Feature extraction and transformation
- Optimization (developer)
  - stochastic gradient descent
  - limited-memory BFGS (L-BFGS)
MLlib is under active development. The APIs marked `Experimental`/`DeveloperApi` may change in future releases, and the migration guide below will explain all changes between releases.
# Dependencies
MLlib uses the linear algebra package Breeze, which depends on `netlib-java` and `jblas`. `netlib-java` and `jblas` depend on native Fortran routines. You need to install the gfortran runtime library if it is not already present on your nodes. MLlib will throw a linking error if it cannot detect these libraries automatically. Due to license issues, we do not include `netlib-java`'s native libraries in MLlib's dependency set. If no native library is available at runtime, you will see a warning message. To use native libraries from `netlib-java`, please include the artifact `com.github.fommil.netlib:all:1.1.2` as a dependency of your project or build your own (see instructions).
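As an illustration, a minimal sketch of declaring this artifact in an sbt build (assuming sbt as the build tool; Maven users would add the equivalent `<dependency>` entry):

{% highlight scala %}
// build.sbt -- a sketch, assuming an sbt-based project.
// The `all` artifact is published as a POM that pulls in the per-platform
// native wrappers, hence pomOnly().
libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
{% endhighlight %}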
To use MLlib in Python, you will need NumPy version 1.4 or newer.
# Migration Guide

## From 0.9 to 1.0
In MLlib v1.0, we support both dense and sparse input in a unified way, which introduces a few breaking changes. If your data is sparse, please store it in a sparse format instead of a dense one to take advantage of sparsity in both storage and computation. Details are described below.
We used to represent a feature vector by `Array[Double]`, which is replaced by `Vector` in v1.0. Algorithms that used to accept `RDD[Array[Double]]` now take `RDD[Vector]`. `LabeledPoint` is now a wrapper of `(Double, Vector)` instead of `(Double, Array[Double])`. Converting `Array[Double]` to `Vector` is straightforward:
{% highlight scala %}
import org.apache.spark.mllib.linalg.{Vector, Vectors}

val array: Array[Double] = ... // a double array
val vector: Vector = Vectors.dense(array) // a dense vector
{% endhighlight %}
`Vectors` provides factory methods to create sparse vectors.

*Note:* Scala imports `scala.collection.immutable.Vector` by default, so you have to import `org.apache.spark.mllib.linalg.Vector` explicitly to use MLlib's `Vector`.
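For instance, a minimal sketch of building a sparse vector with one of those factory methods (the size, indices, and values here are illustrative):

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// A vector of size 3 with nonzero entries 1.0 at index 0 and 3.0 at index 2.
val sv: Vector = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))
{% endhighlight %}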
We used to represent a feature vector by `double[]`, which is replaced by `Vector` in v1.0. Algorithms that used to accept `RDD<double[]>` now take `RDD<Vector>`. `LabeledPoint` is now a wrapper of `(double, Vector)` instead of `(double, double[])`. Converting `double[]` to `Vector` is straightforward:
{% highlight java %}
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

double[] array = ... // a double array
Vector vector = Vectors.dense(array); // a dense vector
{% endhighlight %}
`Vectors` provides factory methods to create sparse vectors.
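A minimal sketch of the sparse equivalent in Java (illustrative values):

{% highlight java %}
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

// A vector of size 3 with nonzero entries 1.0 at index 0 and 3.0 at index 2.
Vector sv = Vectors.sparse(3, new int[] {0, 2}, new double[] {1.0, 3.0});
{% endhighlight %}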
We used to represent a labeled feature vector in a NumPy array, where the first entry corresponds to the label and the rest are features. This representation is replaced by class `LabeledPoint`, which takes both dense and sparse feature vectors.
{% highlight python %}
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint

# Create a labeled point with a positive label and a dense feature vector.
pos = LabeledPoint(1.0, [1.0, 0.0, 3.0])

# Create a labeled point with a negative label and a sparse feature vector.
neg = LabeledPoint(0.0, SparseVector(3, [0, 2], [1.0, 3.0]))
{% endhighlight %}