---
layout: global
title: MLlib - Linear Algebra
---

* Table of contents
{:toc}

# Singular Value Decomposition

Singular Value Decomposition (SVD) for tall-and-skinny matrices. Given an $m \times n$ matrix $A$, we can compute matrices $U, S, V$ such that

\[ A = U \cdot S \cdot V^T \]

There is no restriction on $m$, but we require the $n^2$ doubles of $A^TA$ to fit in memory on a single machine; for example, $n = 10{,}000$ already means $10^8$ doubles, roughly 800 MB. Further, $n$ should be less than $m$.

The decomposition is computed by first forming $A^TA = V S^2 V^T$ and computing the SVD of that small $n \times n$ matrix locally, from which we recover $S$ and $V$. We then compute $U$ with a single matrix multiplication: $U = A \cdot V \cdot S^{-1}$.
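
As a concrete, non-distributed illustration of these three steps, here is a minimal sketch using Breeze on a small dense matrix; the matrix `A` and the use of Breeze (rather than the distributed MLlib code) are assumptions for illustration only.

{% highlight scala %}
import breeze.linalg.{DenseMatrix, diag, svd}
import breeze.numerics.sqrt

// A hypothetical tall-and-skinny matrix with m = 3 rows and n = 2 columns.
val A = DenseMatrix((1.0, 0.0), (0.0, 2.0), (1.0, 1.0))

// Step 1: form the small n x n Gramian A^T A = V S^2 V^T.
val gram = A.t * A

// Step 2: decompose the Gramian locally; its singular values are S^2 and
// its left singular vectors are the columns of V.
val svd.SVD(v, s2, _) = svd(gram)
val s = sqrt(s2)

// Step 3: recover U with one matrix multiplication, U = A * V * S^{-1}.
val u = A * v * diag(s.map(x => 1.0 / x))
{% endhighlight %}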

Only the singular vectors associated with the largest $k$ singular values are recovered. If there are $k$ such values, the dimensions of the returned matrices will be:

  • $S$ is $k \times k$ and diagonal, holding the singular values on its diagonal.
  • $U$ is $m \times k$ and satisfies $U^T U = \mathop{eye}(k)$.
  • $V$ is $n \times k$ and satisfies $V^T V = \mathop{eye}(k)$.
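
Multiplying these truncated factors back together therefore gives an $m \times n$ matrix again, which approximates $A$ (and equals it when $A$ has rank at most $k$):

\[ \underbrace{U}_{m \times k} \cdot \underbrace{S}_{k \times k} \cdot \underbrace{V^T}_{k \times n} \;\approx\; \underbrace{A}_{m \times n} \]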

All input and output is expected in sparse-matrix format: 0-indexed entries of the form ((i, j), value), stored in SparseMatrix RDDs. Below is example usage; the data file is read as one comma-separated i,j,value triple per line.

{% highlight scala %}
import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.SVD
import org.apache.spark.mllib.linalg.SparseMatrix
import org.apache.spark.mllib.linalg.MatrixEntry

// Load and parse the data file
val data = sc.textFile("mllib/data/als/test.data").map { line =>
  val parts = line.split(',')
  MatrixEntry(parts(0).toInt, parts(1).toInt, parts(2).toDouble)
}
val m = 4
val n = 4
val k = 1

// recover the singular vectors for the largest k singular values
val decomposed = SVD.sparseSVD(SparseMatrix(data, m, n), k)
val s = decomposed.S.data

println("singular values = " + s.toArray.mkString)
{% endhighlight %}