Commit 9e63f80e75 by Sean Owen, 2014-02-19: MLLIB-22. Support negative implicit input in ALS
I'm back with another less trivial suggestion for ALS:

In ALS for implicit feedback, input values are treated as weights on squared errors in a loss function (or rather, the weight is a simple function of the input r, like c = 1 + alpha*r). The paper on which it's based assumes that the input is positive. Indeed, if the input is negative, it creates a negative weight on the squared error, which causes things to go haywire: the optimization tries to make the error in that cell as large as possible, and the result is silently bogus.
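For context, the objective being minimized (paraphrasing the implicit-feedback ALS paper by Hu, Koren, and Volinsky) has roughly this form:

```latex
\min_{X,Y} \sum_{u,i} c_{ui}\,(p_{ui} - x_u^{\top} y_i)^2
  + \lambda \Big( \sum_u \lVert x_u \rVert^2 + \sum_i \lVert y_i \rVert^2 \Big),
\qquad c_{ui} = 1 + \alpha\, r_{ui}, \qquad
p_{ui} = \begin{cases} 1 & r_{ui} > 0 \\ 0 & \text{otherwise} \end{cases}
```

If r_ui < 0 and alpha*|r_ui| > 1, then c_ui goes negative, and the weighted squared-error term rewards large errors instead of penalizing them.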

There is a good use case for negative input values, though. Implicit feedback is usually collected from signals of positive interaction, like a view, a like, or a purchase, but it can equally come from "not interested" signals. The natural representation of those is negative values.

The algorithm can be extended quite simply to give these values a sound interpretation: large negative input values should encourage the factorization to produce 0 for those cells, just as strongly as large positive values encourage it to produce 1.

The implications for the algorithm are simple:
* the confidence function value must not be negative, so it becomes 1 + alpha*|r|
* the matrix P should have a value 1 where the input R is _positive_, not merely where it is non-zero. That is actually what the paper already says; it's just that we can no longer assume P = 1 whenever a cell in R is specified, since that cell may be negative. (A sketch of the resulting mapping follows this list.)
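This is an illustrative Scala sketch of the extended mapping from a raw input value to the paper's (preference, confidence) pair; it is not the actual `ALS.scala` code:

```scala
// Illustrative sketch: map a raw input value r to the (preference,
// confidence) pair under the extension described above.
def preferenceAndConfidence(r: Double, alpha: Double): (Double, Double) = {
  val p = if (r > 0) 1.0 else 0.0     // P = 1 only where R is positive
  val c = 1.0 + alpha * math.abs(r)   // confidence can never go negative
  (p, c)
}
```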

This in turn entails just a few lines of code change in `ALS.scala`:
* `rs(i)` becomes `abs(rs(i))`
* when constructing `userXy(us(i))`, the code implicitly adds only where P is 1. Previously that held for every us(i) iterated over, since those were exactly the cells for which P is 1. But now P is zero where rs(i) <= 0, and those cells must not be added. (A simplified sketch of the loop follows this list.)
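Here is a hypothetical, self-contained sketch of one user's accumulation. It is written naively, without the paper's Y^T Y precomputation trick, so it is not the real `ALS.scala` code; the control flow of the fix is the point:

```scala
// Accumulate one user's normal equations (A, b) for implicit ALS.
// rs(i) is the raw input value for item i; ys(i) is that item's factors.
def accumulate(rs: Array[Double], ys: Array[Array[Double]],
               alpha: Double, rank: Int): (Array[Double], Array[Double]) = {
  val a = new Array[Double](rank * rank) // accumulates Y^T C Y, row-major
  val b = new Array[Double](rank)        // accumulates Y^T C p
  for (i <- rs.indices) {
    val c = 1.0 + alpha * math.abs(rs(i)) // rs(i) -> abs(rs(i))
    val y = ys(i)
    for (j <- 0 until rank; k <- 0 until rank) {
      a(j * rank + k) += c * y(j) * y(k)
    }
    if (rs(i) > 0) {
      // P = 1 only where the input is strictly positive, so only these
      // cells contribute to the right-hand side.
      for (j <- 0 until rank) b(j) += c * y(j)
    }
  }
  (a, b)
}
```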

I think it's a safe change because:
* It doesn't change any existing behavior (unless you're using negative values, in which case results are already borked)
* It's the simplest direct extension of the paper's algorithm
* (I've used it to good effect in production FWIW)

Tests included.

I tweaked minor things en route:
* the `ALS.scala` scaladoc wrote "R = Xt*Y" where the paper and the rest of the code define it as "R = X*Yt"
* the RMSE in the ALS tests uses a confidence-weighted mean, but its denominator was not actually the sum of the weights (a sketch of the corrected computation follows this list)
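For clarity, a properly weighted RMSE divides by the sum of the weights rather than the number of terms; this is a sketch, not the actual test code:

```scala
// Confidence-weighted RMSE: sqrt(sum(w * e^2) / sum(w)).
def weightedRmse(errors: Array[Double], weights: Array[Double]): Double = {
  val weightedSquares = (errors, weights).zipped.map((e, w) => w * e * e).sum
  math.sqrt(weightedSquares / weights.sum)
}
```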

Excuse my Scala style; I'm sure it needs tweaks.

Author: Sean Owen <sowen@cloudera.com>

Closes #500 from srowen/ALSNegativeImplicitInput and squashes the following commits:

cf902a9 [Sean Owen] Support negative implicit input in ALS
953be1c [Sean Owen] Make weighted RMSE in ALS test actually weighted; adjust comment about R = X*Yt

Apache Spark

Lightning-Fast Cluster Computing - http://spark.incubator.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.incubator.apache.org/documentation.html. This README file only contains basic setup instructions.

Building

Spark requires Scala 2.10. The project is built using Simple Build Tool (SBT). If SBT is already installed, the build uses your system's version; otherwise it attempts to download sbt automatically. To build Spark and its example programs, run:

./sbt/sbt assembly

Once you've built Spark, the easiest way to start using it is the shell:

./bin/spark-shell

Or, for the Python API, the Python shell (./bin/pyspark).

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> <params>. For example:

./bin/run-example org.apache.spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <master> parameter that is the cluster URL to connect to. This can be a mesos:// or spark:// URL, or "local" to run locally with one thread, or "local[N]" to run locally with N threads.
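For example, to run SparkPi locally with four threads, or against a standalone cluster (where "host" is a placeholder for your master's hostname):

./bin/run-example org.apache.spark.examples.SparkPi local[4]
./bin/run-example org.apache.spark.examples.SparkPi spark://host:7077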

Running tests

Testing first requires building Spark, as described above. Once Spark is built, tests can be run using:

./sbt/sbt test

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the SPARK_HADOOP_VERSION environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

# Apache Hadoop 1.2.1
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v1
$ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.2.x, 2.1.x, 2.0.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set SPARK_YARN=true:

# Apache Hadoop 2.0.5-alpha
$ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

# Cloudera CDH 4.2.0 with MapReduce v2
$ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

# Apache Hadoop 2.2.x and newer
$ SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly

When developing a Spark application, specify the Hadoop version by adding the "hadoop-client" artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to libraryDependencies:

"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's <dependencies> section:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>

Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.

Apache Incubator Notice

Apache Spark is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.