---
layout: global
title: Machine Learning Library (MLlib)
---
MLlib is a Spark implementation of some common machine learning algorithms and utilities,
including classification, regression, clustering, collaborative
filtering, dimensionality reduction, as well as underlying optimization primitives:
* [Basics](mllib-basics.html)
  * data types
  * summary statistics
* Classification and regression
  * [linear support vector machine (SVM)](mllib-linear-methods.html#linear-support-vector-machine-svm)
  * [logistic regression](mllib-linear-methods.html#logistic-regression)
  * [linear least squares, Lasso, and ridge regression](mllib-linear-methods.html#linear-least-squares-lasso-and-ridge-regression)
  * [decision tree](mllib-decision-tree.html)
  * [naive Bayes](mllib-naive-bayes.html)
* [Collaborative filtering](mllib-collaborative-filtering.html)
  * alternating least squares (ALS)
* [Clustering](mllib-clustering.html)
  * k-means
* [Dimensionality reduction](mllib-dimensionality-reduction.html)
  * singular value decomposition (SVD)
  * principal component analysis (PCA)
* [Optimization](mllib-optimization.html)
  * stochastic gradient descent
  * limited-memory BFGS (L-BFGS)
MLlib is a new component under active development.
The APIs marked `Experimental`/`DeveloperApi` may change in future releases,
and we will provide migration guides between releases.
## Dependencies
MLlib uses the linear algebra package [Breeze](http://www.scalanlp.org/), which depends on
[netlib-java](https://github.com/fommil/netlib-java), and
[jblas](https://github.com/mikiobraun/jblas).
`netlib-java` and `jblas` depend on native Fortran routines.
You need to install the
[gfortran runtime library](https://github.com/mikiobraun/jblas/wiki/Missing-Libraries) if it is not
already present on your nodes. MLlib will throw a linking error if it cannot detect these libraries
automatically. Due to license issues, we do not include `netlib-java`'s native libraries in MLlib's
dependency set. If no native library is available at runtime, you will see a warning message. To
use native libraries from `netlib-java`, please include the artifact
`com.github.fommil.netlib:all:1.1.2` as a dependency of your project or build your own (see
[instructions](https://github.com/fommil/netlib-java/blob/master/README.md#machine-optimised-system-libraries)).
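For a Maven build, the `netlib-java` artifact mentioned above could be declared roughly as follows (a sketch; the `all` artifact is packaged as a POM, so the `type` element is needed, and you should adapt this to your own build tool):

{% highlight xml %}
<dependency>
  <groupId>com.github.fommil.netlib</groupId>
  <artifactId>all</artifactId>
  <version>1.1.2</version>
  <type>pom</type>
</dependency>
{% endhighlight %}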
To use MLlib in Python, you will need [NumPy](http://www.numpy.org) version 1.4 or newer.
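A quick way to check whether your installed NumPy meets this requirement is to inspect its version string (a minimal sketch; `numpy.__version__` is NumPy's standard version attribute):

{% highlight python %}
# Verify that NumPy meets the minimum version (1.4) required by MLlib's Python API.
import numpy

version = tuple(int(part) for part in numpy.__version__.split(".")[:2])
if version < (1, 4):
    raise RuntimeError("MLlib requires NumPy 1.4 or newer; found " + numpy.__version__)
print("NumPy %s is sufficient" % numpy.__version__)
{% endhighlight %}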
---
## Migration guide
### From 0.9 to 1.0
In MLlib v1.0, we support both dense and sparse input in a unified way, which introduces a few
breaking changes. If your data is sparse, please store it in a sparse format instead of dense to
take advantage of sparsity in both storage and computation.
<div class="codetabs">
<div data-lang="scala" markdown="1">
We used to represent a feature vector by `Array[Double]`, which is replaced by
[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
to accept `RDD[Array[Double]]` now take
`RDD[Vector]`. [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint)
is now a wrapper of `(Double, Vector)` instead of `(Double, Array[Double])`. Converting
`Array[Double]` to `Vector` is straightforward:
{% highlight scala %}
import org.apache.spark.mllib.linalg.{Vector, Vectors}
val array: Array[Double] = ... // a double array
val vector: Vector = Vectors.dense(array) // a dense vector
{% endhighlight %}
[`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$) provides factory methods to create sparse vectors.

*Note*: Scala imports `scala.collection.immutable.Vector` by default, so you have to import `org.apache.spark.mllib.linalg.Vector` explicitly to use MLlib's `Vector`.
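For example, a sparse vector equivalent to the dense vector `(1.0, 0.0, 3.0)` can be created with `Vectors.sparse` by giving the vector's size together with the indices and values of its nonzero entries (a sketch; requires `spark-mllib` on the classpath):

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Size 3, with nonzero entries 1.0 at index 0 and 3.0 at index 2.
val sv: Vector = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0))
{% endhighlight %}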
</div>

<div data-lang="java" markdown="1">
We used to represent a feature vector by `double[]`, which is replaced by
[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
to accept `RDD<double[]>` now take
`RDD<Vector>`. [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint)
is now a wrapper of `(double, Vector)` instead of `(double, double[])`. Converting `double[]` to
`Vector` is straightforward:
{% highlight java %}
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
double[] array = ... // a double array
Vector vector = Vectors.dense(array); // a dense vector
{% endhighlight %}
[`Vectors`](api/scala/index.html#org.apache.spark.mllib.linalg.Vectors$) provides factory methods to
create sparse vectors.
</div>
<div data-lang="python" markdown="1">
We used to represent a labeled feature vector in a NumPy array, where the first entry corresponds to
the label and the rest are features. This representation is replaced by the class
[`LabeledPoint`](api/python/pyspark.mllib.regression.LabeledPoint-class.html), which takes both
dense and sparse feature vectors.
{% highlight python %}
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint
# Create a labeled point with a positive label and a dense feature vector.
pos = LabeledPoint(1.0, [1.0, 0.0, 3.0])
# Create a labeled point with a negative label and a sparse feature vector.
neg = LabeledPoint(0.0, SparseVector(3, [0, 2], [1.0, 3.0]))
{% endhighlight %}
</div>
</div>