[SPARK-2843][MLLIB] add a section about regularization parameter in ALS
atalwalkar srowen

Author: Xiangrui Meng <meng@databricks.com>

Closes #2064 from mengxr/als-doc and squashes the following commits:

b2e20ab [Xiangrui Meng] introduced -> discussed
98abdd7 [Xiangrui Meng] add reference
339bd08 [Xiangrui Meng] add a section about regularization parameter in ALS
parent e1571874f2
commit e0f946265b
@@ -43,6 +43,17 @@ level of confidence in observed user preferences, rather than explicit ratings g
model then tries to find latent factors that can be used to predict the expected preference of a
user for an item.
### Scaling of the regularization parameter
Since v1.1, we scale the regularization parameter `lambda` in solving each least squares
problem by the number of ratings the user generated when updating user factors, or by the
number of ratings the product received when updating product factors.
This approach is named "ALS-WR" and is discussed in the paper
"[Large-Scale Parallel Collaborative Filtering for the Netflix Prize](http://dx.doi.org/10.1007/978-3-540-68880-8_32)".
It makes `lambda` less dependent on the scale of the dataset,
so the best parameter learned from a sampled subset can be applied to the full dataset
with similar performance expected.
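The scaling above can be sketched as a single least-squares update in NumPy. This is an illustration of the normal equations under the ALS-WR scheme, not Spark's actual implementation; `update_user_factor` is a hypothetical helper. For a user who rated `n_u` items, the regularizer is `lambda * n_u` rather than a fixed `lambda`:

```python
import numpy as np

def update_user_factor(item_factors, ratings, lam):
    """One ALS-WR user-factor update: solve
    (Y^T Y + lambda * n_u * I) x = Y^T r,
    where n_u is the number of ratings this user generated."""
    n_u, k = item_factors.shape           # n_u ratings, rank-k factors
    gram = item_factors.T @ item_factors  # Y^T Y, shape (k, k)
    reg = lam * n_u * np.eye(k)           # lambda scaled by the rating count
    rhs = item_factors.T @ ratings        # Y^T r, shape (k,)
    return np.linalg.solve(gram + reg, rhs)
```

Because `lambda` is multiplied by the per-row rating count, the effective regularization per rating stays roughly constant as the dataset grows or shrinks, which is why a `lambda` tuned on a sample transfers to the full data.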
## Examples
<div class="codetabs">