---
layout: global
title: Clustering - MLlib
displayTitle: <a href="mllib-guide.html">MLlib</a> - Clustering
---

* Table of contents
{:toc}

## Clustering

Clustering is an unsupervised learning problem whereby we aim to group subsets
of entities with one another based on some notion of similarity. Clustering is
often used for exploratory analysis and/or as a component of a hierarchical
supervised learning pipeline (in which distinct classifiers or regression
models are trained for each cluster).

MLlib supports
[k-means](http://en.wikipedia.org/wiki/K-means_clustering) clustering, one of
the most commonly used clustering algorithms, which groups the data points into a
predefined number of clusters. The MLlib implementation includes a parallelized
variant of the [k-means++](http://en.wikipedia.org/wiki/K-means%2B%2B) method
called [k-means||](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf).
The implementation in MLlib has the following parameters:

* *k* is the number of desired clusters.
* *maxIterations* is the maximum number of iterations to run.
* *initializationMode* specifies either random initialization or
initialization via k-means\|\|.
* *runs* is the number of times to run the k-means algorithm (k-means is not
guaranteed to find a globally optimal solution, and when run multiple times on
a given dataset, the algorithm returns the best clustering result).
* *initializationSteps* determines the number of steps in the k-means\|\| algorithm.
* *epsilon* determines the distance threshold within which we consider k-means to have converged.
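
As a concrete illustration of how *k*, *maxIterations*, and *epsilon* interact, here is a
minimal single-machine sketch of Lloyd's algorithm in plain Python. This is only an
illustrative sketch, not MLlib's distributed implementation, and the function and parameter
names are invented for this example:

```python
import random
from math import sqrt

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    n = len(points)
    return tuple(sum(xs) / n for xs in zip(*points))

def kmeans(points, k, max_iterations=20, epsilon=1e-4, seed=0):
    rng = random.Random(seed)
    # "random" initializationMode: pick k distinct points as starting centers.
    centers = rng.sample(points, k)
    for _ in range(max_iterations):  # maxIterations bounds the loop
        # Assignment step: attach each point to its closest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            best = min(range(k), key=lambda i: squared_distance(p, centers[i]))
            clusters[best].append(p)
        # Update step: move each center to the mean of its cluster.
        new_centers = [mean(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        # epsilon: stop once no center moves farther than the threshold.
        moved = max(sqrt(squared_distance(a, b))
                    for a, b in zip(centers, new_centers))
        centers = new_centers
        if moved <= epsilon:
            break
    return centers
```

In this sketch, *runs* would correspond to repeating `kmeans` with different seeds and
keeping the result with the lowest cost, and k-means\|\| replaces the random
initialization step with a smarter seeding of the initial centers.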

## Examples

<div class="codetabs">

<div data-lang="scala" markdown="1">
The following code snippets can be executed in `spark-shell`.

In the following example, after loading and parsing data, we use the
[`KMeans`](api/scala/index.html#org.apache.spark.mllib.clustering.KMeans) object to cluster the data
into two clusters. The number of desired clusters is passed to the algorithm. We then compute the
Within Set Sum of Squared Error (WSSSE). You can reduce this error measure by increasing *k*. In
fact, the optimal *k* is usually one where there is an "elbow" in the WSSSE graph.

{% highlight scala %}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Load and parse the data
val data = sc.textFile("data/kmeans_data.txt")
val parsedData = data.map(s => Vectors.dense(s.split(' ').map(_.toDouble)))

// Cluster the data into two classes using KMeans
val numClusters = 2
val numIterations = 20
val clusters = KMeans.train(parsedData, numClusters, numIterations)

// Evaluate clustering by computing Within Set Sum of Squared Errors
val WSSSE = clusters.computeCost(parsedData)
println("Within Set Sum of Squared Errors = " + WSSSE)
{% endhighlight %}
</div>

<div data-lang="java" markdown="1">
All of MLlib's methods use Java-friendly types, so you can import and call them there the same
way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
calling `.rdd()` on your `JavaRDD` object.
</div>

<div data-lang="python" markdown="1">
The following examples can be tested in the PySpark shell.

In the following example, after loading and parsing data, we use the KMeans object to cluster the
data into two clusters. The number of desired clusters is passed to the algorithm. We then compute
the Within Set Sum of Squared Error (WSSSE). You can reduce this error measure by increasing *k*.
In fact, the optimal *k* is usually one where there is an "elbow" in the WSSSE graph.

{% highlight python %}
from pyspark.mllib.clustering import KMeans
from numpy import array
from math import sqrt

# Load and parse the data
data = sc.textFile("data/kmeans_data.txt")
parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))

# Build the model (cluster the data)
clusters = KMeans.train(parsedData, 2, maxIterations=10,
        runs=10, initializationMode="random")

# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point - center)]))

WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
{% endhighlight %}
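
The "elbow" behavior can be illustrated without Spark: for 1-D data, the optimal clusters
are contiguous runs of the sorted values, so the best achievable WSSSE for each *k* can be
found by an exhaustive search over split points. Below is a plain-Python sketch; the
dataset and helper names are invented for illustration:

```python
from itertools import combinations

def wssse(groups):
    # Within Set Sum of Squared Errors for a fixed grouping.
    total = 0.0
    for g in groups:
        center = sum(g) / len(g)
        total += sum((x - center) ** 2 for x in g)
    return total

def best_wssse(xs, k):
    # For 1-D data the optimal k clusters are contiguous runs of the
    # sorted values, so searching all split points is exact.
    xs = sorted(xs)
    best = float("inf")
    for cuts in combinations(range(1, len(xs)), k - 1):
        bounds = [0] + list(cuts) + [len(xs)]
        groups = [xs[a:b] for a, b in zip(bounds, bounds[1:])]
        best = min(best, wssse(groups))
    return best

# Three well-separated groups of points.
data = [0.0, 0.1, 0.2, 8.0, 8.1, 8.2, 20.0, 20.1]
for k in range(1, 5):
    print(k, round(best_wssse(data, k), 3))
```

On this toy dataset the WSSSE drops sharply up to k=3 and barely improves afterwards, so
the elbow suggests k=3, matching the three groups in the data.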
</div>

</div>