Fix the style violation (space before , and :).
This PR is a followup for #10643.
Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
Closes #10684 from sarutak/SPARK-12692-followup-mllib.
Fix most build warnings: mostly deprecated API usages. I'll annotate some of the changes below. CC rxin who is leading the charge to remove the deprecated APIs.
Author: Sean Owen <sowen@cloudera.com>
Closes #10570 from srowen/SPARK-12618.
For the BinaryClassificationEvaluator, the scaladoc doesn't mention that "areaUnderPR" is supported, only that the default is "areaUnderROC".
Also, in the documentation, it is said that:
"The default metric used to choose the best ParamMap can be overriden by the setMetric method in each of these evaluators."
However, the method is called setMetricName.
This PR aims to fix both issues.
Author: BenFradet <benjamin.fradet@gmail.com>
Closes #10328 from BenFradet/SPARK-12368.
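For reference, a minimal sketch of the corrected usage (the predictions DataFrame is assumed):

```scala
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

// The setter is setMetricName; "areaUnderPR" is supported in addition
// to the default "areaUnderROC".
val evaluator = new BinaryClassificationEvaluator()
  .setMetricName("areaUnderPR")
val auPR = evaluator.evaluate(predictions)  // predictions DataFrame assumed
```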
Modified the definition of R^2 for regression through the origin (see the sketch below). Added a modified test for regression metrics.
Author: Imran Younus <iyounus@us.ibm.com>
Author: Imran Younus <imranyounus@gmail.com>
Closes #10384 from iyounus/SPARK_12331_R2_for_regression_through_origin.
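For reference, a common definition of R^2 for regression through the origin uses the uncentered total sum of squares; this is my reading of the change, and the exact formula in RegressionMetrics may differ:

```latex
% Uncentered R^2: no mean subtraction in the denominator, since the
% intercept-only model is not nested in a model forced through the origin.
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i y_i^2}
```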
DecisionTreeRegressor will provide variance of prediction as a Double column.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #8866 from yanboliang/spark-9622.
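A minimal sketch of the new column, assuming training/test DataFrames with the usual label/features columns (the column name "variance" is illustrative):

```scala
import org.apache.spark.ml.regression.DecisionTreeRegressor

val dtr = new DecisionTreeRegressor()
  .setVarianceCol("variance")  // request the per-prediction variance column
val model = dtr.fit(training)
model.transform(test).select("prediction", "variance").show()
```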
callUDF has been deprecated. However, we do not have an alternative for users to specify the output data type without type tags. This pull request introduces a new API for that and replaces the invocations of the deprecated callUDF with it (see the sketch below).
Author: Reynold Xin <rxin@databricks.com>
Closes #10547 from rxin/SPARK-12599.
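A sketch of the replacement pattern, assuming a DataFrame df with a numeric "value" column:

```scala
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types.DoubleType

// Declare the return DataType explicitly instead of relying on type tags.
val plusOne = udf((x: Double) => x + 1.0, DoubleType)
df.select(plusOne(df("value")))
```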
A slight adjustment to the checker configuration was needed; there is
a handful of warnings still left, but those are because of a bug in
the checker that I'll fix separately (before enabling errors for the
checker, of course).
Author: Marcelo Vanzin <vanzin@cloudera.com>
Closes #10535 from vanzin/SPARK-3873-mllib.
Include the following changes:
1. Close `java.sql.Statement` (see the sketch below).
2. Fix incorrect `asInstanceOf`.
3. Remove unnecessary `synchronized` and `ReentrantLock`.
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10440 from zsxwing/findbugs.
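A minimal sketch of the close-on-all-paths pattern behind point 1 above; the connection and sql values are assumed:

```scala
val statement = connection.createStatement()
try {
  statement.executeUpdate(sql)
} finally {
  statement.close()  // runs even if executeUpdate throws
}
```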
ParamMap#filter uses `mutable.Map#filterKeys`. The return type of `filterKeys` is `collection.Map`, not `mutable.Map`, but the result is cast to `mutable.Map` using `asInstanceOf`, so we get a `ClassCastException` (see the sketch below).
Also, the result of `Map#filterKeys` is not Serializable; this is a Scala issue (https://issues.scala-lang.org/browse/SI-6654).
Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
Closes #10381 from sarutak/SPARK-12424.
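An illustration of the pitfall and one safe alternative (not the exact PR diff):

```scala
import scala.collection.mutable

val m = mutable.Map("a" -> 1, "b" -> 2)
val view = m.filterKeys(_ == "a")  // static type collection.Map, backed by a view
// view.asInstanceOf[mutable.Map[String, Int]]  // would throw ClassCastException
val safe = mutable.Map(view.toSeq: _*)  // materialize into a real mutable.Map
```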
Restore the original value of os.arch property after each test
Since some tests forcibly set a specific value for the os.arch property, we need to restore the original value afterwards (see the sketch below).
Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
Closes #10289 from kiszk/SPARK-12311.
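A minimal sketch of the save/restore pattern (the forced value and test body are illustrative):

```scala
val originalArch = System.getProperty("os.arch")
try {
  System.setProperty("os.arch", "amd64")  // a test forcing a specific value
  // ... assertions that depend on os.arch ...
} finally {
  System.setProperty("os.arch", originalArch)  // restore for subsequent tests
}
```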
Only load explainedVariance in PCAModel if it was written with Spark > 1.6.x
jkbradley is this kind of what you had in mind?
Author: Sean Owen <sowen@cloudera.com>
Closes #10327 from srowen/SPARK-12349.
Added a catch for the exception thrown when casting Long to Int as PySpark ALS Ratings are serialized. It is easy to accidentally use Long IDs for user/product, and before, this would fail with a somewhat cryptic "ClassCastException: java.lang.Long cannot be cast to java.lang.Integer." Now if this is done, a more descriptive error is shown, e.g. "PickleException: Ratings id 1205640308657491975 exceeds max integer value of 2147483647."
Author: Bryan Cutler <bjcutler@us.ibm.com>
Closes #9361 from BryanCutler/als-pyspark-long-id-error-SPARK-10158.
Use ```sqlContext``` from ```MLlibTestSparkContext``` rather than creating a new one for spark.ml test suites. I have checked thoroughly and found four test cases that need to be updated.
cc mengxr jkbradley
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10279 from yanboliang/spark-12309.
JIRA: https://issues.apache.org/jira/browse/SPARK-12016
We should not directly use Word2VecModel in pyspark. We need to wrap it in a Word2VecModelWrapper when loading it in pyspark.
Author: Liang-Chi Hsieh <viirya@appier.com>
Closes #10100 from viirya/fix-load-py-wordvecmodel.
As noted in PR #9441, implementing `tallSkinnyQR` uncovered a bug with our PySpark `RowMatrix` constructor. As discussed on the dev list [here](http://apache-spark-developers-list.1001551.n3.nabble.com/K-Means-And-Class-Tags-td10038.html), there appears to be an issue with type erasure with RDDs coming from Java, and by extension from PySpark. Although we are attempting to construct a `RowMatrix` from an `RDD[Vector]` in [PythonMLlibAPI](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala#L1115), the `Vector` type is erased, resulting in an `RDD[Object]`. Thus, when calling Scala's `tallSkinnyQR` from PySpark, we get a Java `ClassCastException` in which an `Object` cannot be cast to a Spark `Vector`. As noted in the aforementioned dev list thread, this issue was also encountered with `DecisionTrees`, and the fix involved an explicit `retag` of the RDD with a `Vector` type. `IndexedRowMatrix` and `CoordinateMatrix` do not appear to have this issue, likely because their related helper functions in `PythonMLlibAPI` create the RDDs explicitly from DataFrames with pattern matching, thus preserving the types.
This PR currently contains that retagging fix applied to the `createRowMatrix` helper function in `PythonMLlibAPI`. This PR blocks #9441, so once this is merged, the other can be rebased.
cc holdenk
Author: Mike Dusenberry <mwdusenb@us.ibm.com>
Closes #9458 from dusenberrymw/SPARK-11497_PySpark_RowMatrix_Constructor_Has_Type_Erasure_Issue.
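A sketch of the retagging fix; retag is Spark-internal API, so this only compiles inside Spark itself, and the signature here is abbreviated:

```scala
import org.apache.spark.api.java.JavaRDD
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// Restore the erased element type before handing the RDD to RowMatrix.
def createRowMatrix(rows: JavaRDD[Vector], numRows: Long, numCols: Int): RowMatrix = {
  new RowMatrix(rows.rdd.retag(classOf[Vector]), numRows, numCols)
}
```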
LogisticRegression training summary should still function if the predictionCol is set to an empty string or otherwise unset (related to https://issues.apache.org/jira/browse/SPARK-9718 ).
Author: Holden Karau <holden@pigscanfly.ca>
Author: Holden Karau <holden@us.ibm.com>
Closes #9037 from holdenk/SPARK-10991-LogisticRegressionTrainingSummary-handle-empty-prediction-col.
Add `computePrincipalComponentsAndVariance` to also compute PCA's explained variance.
CC mengxr
Author: Sean Owen <sowen@cloudera.com>
Closes #9736 from srowen/SPARK-11530.
Currently word2vec has the window size hard-coded at 5; some users may want different sizes (for example, when using n-gram input or similar). The user request comes from http://stackoverflow.com/questions/32231975/spark-word2vec-window-size .
Author: Holden Karau <holden@us.ibm.com>
Author: Holden Karau <holden@pigscanfly.ca>
Closes #8513 from holdenk/SPARK-10299-word2vec-should-allow-users-to-specify-the-window-size.
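A sketch of the new setter, assuming corpus: RDD[Iterable[String]]:

```scala
import org.apache.spark.mllib.feature.Word2Vec

val model = new Word2Vec()
  .setVectorSize(100)
  .setWindowSize(10)  // previously fixed at 5
  .fit(corpus)
```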
felixcheung, mengxr
Just added a message to require()
Author: Dominik Dahlem <dominik.dahlem@gmail.com>
Closes #9598 from dahlem/ddahlem_regression_evaluator_double_predictions_message_04112015.
jira: https://issues.apache.org/jira/browse/SPARK-11605
Check Java compatibility for MLlib for this release.
Fixes:
1. `StreamingTest.registerStream` needs a Java-friendly interface.
2. `GradientBoostedTreesModel.computeInitialPredictionAndError` and `GradientBoostedTreesModel.updatePredictionError` have a Java compatibility issue. Mark them as `DeveloperApi`.
TBD:
[updated] no fix for now per discussion.
`org.apache.spark.mllib.classification.LogisticRegressionModel`
`public scala.Option<java.lang.Object> getThreshold();` has wrong return type for Java invocation.
`SVMModel` has a similar issue.
Yet adding a `scala.Option<java.lang.Double> getThreshold()` would result in an overloading error due to the same function signature, and adding a new function with a different name seems unnecessary.
cc jkbradley feynmanliang
Author: Yuhao Yang <hhbyyh@gmail.com>
Closes #10102 from hhbyyh/javaAPI.
Sparse features generated in LinearDataGenerator no longer create dense vectors as an intermediate step.
Author: Nakul Jindal <njindal@us.ibm.com>
Closes #9756 from nakul02/SPARK-11439_sparse_without_creating_dense_feature.
Add ```SQLTransformer``` user guide and example code, and make the Scala API doc clearer.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10006 from yanboliang/spark-11958.
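A sketch mirroring the kind of example the user guide uses; "__THIS__" is the placeholder for the input dataset, and a DataFrame df with columns v1 and v2 is assumed:

```scala
import org.apache.spark.ml.feature.SQLTransformer

val sqlTrans = new SQLTransformer()
  .setStatement("SELECT *, (v1 + v2) AS v3, (v1 * v2) AS v4 FROM __THIS__")
sqlTrans.transform(df).show()
```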
Switched from using SQLContext constructor to using getOrCreate, mainly in model save/load methods.
This covers all instances in spark.mllib. There were no uses of the constructor in spark.ml.
CC: mengxr yhuai
Author: Joseph K. Bradley <joseph@databricks.com>
Closes #10161 from jkbradley/mllib-sqlcontext-fix.
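The pattern applied throughout, assuming an existing SparkContext sc:

```scala
import org.apache.spark.sql.SQLContext

// Before: new SQLContext(sc) built a fresh context on every call.
// getOrCreate reuses the active context if one already exists.
val sqlContext = SQLContext.getOrCreate(sc)
```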
jira: https://issues.apache.org/jira/browse/SPARK-12096
word2vec can now handle a much bigger vocabulary.
The old constraint `vocabSize.toLong * vectorSize < Int.MaxValue / 8` should be removed.
The new constraint is `vocabSize.toLong * vectorSize` < max array length (usually a little less than Int.MaxValue).
I tested with vocabSize over 18M and vectorSize = 100.
srowen jkbradley Sorry to have missed this in the last PR. I was reminded today.
Author: Yuhao Yang <hhbyyh@gmail.com>
Closes #10103 from hhbyyh/w2vCapacity.
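A rough sketch of the relaxed check; vocabSize and vectorSize are assumed in scope, and the exact constant and message in the code may differ:

```scala
// Many JVMs cap array lengths a few words below Int.MaxValue.
val maxArrayLength = Int.MaxValue - 8
require(vocabSize.toLong * vectorSize < maxArrayLength,
  s"vocabSize ($vocabSize) * vectorSize ($vectorSize) exceeds the max array length")
```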
We should upgrade to SBT 0.13.9, since this is a requirement in order to use SBT's new Maven-style resolution features (which will be done in a separate patch, because it's blocked by some binary compatibility issues in the POM reader plugin).
I also upgraded Scalastyle to version 0.8.0, which was necessary in order to fix a Scala 2.10.5 compatibility issue (see https://github.com/scalastyle/scalastyle/issues/156). The newer Scalastyle is slightly stricter about whitespace surrounding tokens, so I fixed the new style violations.
Author: Josh Rosen <joshrosen@databricks.com>
Closes #10112 from JoshRosen/upgrade-to-sbt-0.13.9.
This replaces https://github.com/apache/spark/pull/9696
Invoke Checkstyle and print any errors to the console, failing the step.
Use Google's style rules modified according to
https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
Some important checks are disabled (see TODOs in `checkstyle.xml`) due to
multiple violations being present in the codebase.
I suggest fixing those TODOs in separate PRs.
More on Checkstyle can be found on the [official website](http://checkstyle.sourceforge.net/).
Sample output (from [build 46345](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/46345/consoleFull)) (duplicated because I ran the build twice with different profiles):
> Checkstyle checks failed at following occurrences:
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/UnsafeRowParquetRecordReader.java:[217,7] (coding) MissingSwitchDefault: switch without "default" clause.
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java:[198,10] (modifier) ModifierOrder: 'protected' modifier out of order with the JLS suggestions.
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/UnsafeRowParquetRecordReader.java:[217,7] (coding) MissingSwitchDefault: switch without "default" clause.
> [ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java:[198,10] (modifier) ModifierOrder: 'protected' modifier out of order with the JLS suggestions.
> [error] running /home/jenkins/workspace/SparkPullRequestBuilder2/dev/lint-java ; received return code 1
Also fix some of the minor violations that didn't require sweeping changes.
Apologies for the previous botched PRs - I finally figured out the issue.
cr: JoshRosen, pwendell
> I state that the contribution is my original work, and I license the work to the project under the project's open source license.
Author: Dmitry Erastov <derastov@gmail.com>
Closes #9867 from dskrvk/master.
This fixes SPARK-12000, verified locally with JDK 7. It seems that `scaladoc` tries to match method names and gets confused by annotations.
cc: JoshRosen jkbradley
Author: Xiangrui Meng <meng@databricks.com>
Closes #10114 from mengxr/SPARK-12000.2.
cc mengxr noel-smith
I worked on this issue based on https://github.com/apache/spark/pull/8729.
ehsanmok, thank you for your contribution!
Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
Author: Ehsan M.Kermani <ehsanmo1367@gmail.com>
Closes #9338 from yu-iskw/JIRA-10266.
jira: https://issues.apache.org/jira/browse/SPARK-11898
syn0Global and syn1Global in word2vec are quite large objects with size (vocab * vectorSize * 8), yet they are passed to workers using basic task serialization.
Using broadcast can greatly improve the performance. My benchmark shows that, for a 1M vocabulary and the default vectorSize of 100, changing to broadcast can:
1. decrease the worker memory consumption by 45%.
2. decrease running time by 40%.
This will also help extend the upper limit for Word2Vec.
Author: Yuhao Yang <hhbyyh@gmail.com>
Closes #9878 from hhbyyh/w2vBC.
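A sketch of the pattern, with sc, dataset, syn0Global, and syn1Global assumed in scope:

```scala
// Ship the large shared arrays once per executor via broadcast,
// instead of serializing them into every task closure.
val syn0Bc = sc.broadcast(syn0Global)
val syn1Bc = sc.broadcast(syn1Global)
dataset.mapPartitions { iter =>
  val syn0 = syn0Bc.value  // fetched at most once per executor
  val syn1 = syn1Bc.value
  // ... per-partition training against syn0/syn1 ...
  iter
}
```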
Add read/write support to LDA, similar to ALS.
save/load for ml.LocalLDAModel is done.
For DistributedLDAModel, I'm not sure if we can invoke save on the mllib.DistributedLDAModel directly. I'll send an update after some testing.
Author: Yuhao Yang <hhbyyh@gmail.com>
Closes #9894 from hhbyyh/ldaMLsave.
Document for 1.6 that the summaries mostly ignore the weight column.
To be corrected in 1.7.
CC: mengxr thunterdb
Author: Joseph K. Bradley <joseph@databricks.com>
Closes #9927 from jkbradley/linregsummary-doc.
There is an unhandled case in the transform method of VectorAssembler if one of the input columns doesn't have one of the supported types (DoubleType, NumericType, BooleanType, or VectorUDT).
So, if you try to transform a column of StringType, you get a cryptic "scala.MatchError: StringType".
This PR aims to fix this, throwing a SparkException when dealing with an unknown column type.
Author: BenFradet <benjamin.fradet@gmail.com>
Closes #9885 from BenFradet/SPARK-11902.
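A minimal sketch of the pattern, not the exact PR diff:

```scala
import org.apache.spark.SparkException
import org.apache.spark.sql.types._

// Replace the non-exhaustive match with an explicit error for unsupported
// types (the real code also accepts VectorUDT, omitted here).
def checkInputType(field: StructField): Unit = field.dataType match {
  case DoubleType | BooleanType => ()
  case _: NumericType => ()
  case dt => throw new SparkException(
    s"VectorAssembler does not support the ${dt.simpleString} type")
}
```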
Like [SPARK-11852](https://issues.apache.org/jira/browse/SPARK-11852), ```k``` is a param and we should save it under ```metadata/``` rather than both under ```data/``` and ```metadata/```. Refactor the constructor of ```ml.feature.PCAModel``` to take only ```pc``` but construct ```mllib.feature.PCAModel``` inside ```transform```.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #9897 from yanboliang/spark-11912.
I believe this works for general estimators within CrossValidator, including compound estimators. (See the complex unit test.)
Added read/write for all 3 Evaluators as well.
CC: mengxr yanboliang
Author: Joseph K. Bradley <joseph@databricks.com>
Closes #9848 from jkbradley/cv-io.
```withStd``` and ```withMean``` should be params of ```StandardScaler``` and ```StandardScalerModel```.
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #9839 from yanboliang/standardScaler-refactor.
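A sketch of the params now exposed on the estimator, assuming df has a "features" vector column:

```scala
import org.apache.spark.ml.feature.StandardScaler

val scaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)   // scale to unit standard deviation
  .setWithMean(false) // do not center (centering densifies sparse vectors)
val scaled = scaler.fit(df).transform(df)
```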
Need to remove parent directory (```className```) rather than just tempDir (```className/random_name```)
I tested this with IDFSuite, which has 2 read/write tests, and it fixes the problem.
CC: mengxr Can you confirm this is fine? I believe it is since the same ```random_name``` is used for all tests in a suite; we basically have an extra unneeded level of nesting.
Author: Joseph K. Bradley <joseph@databricks.com>
Closes #9851 from jkbradley/tempdir-cleanup.
Add read/write support to the following estimators under spark.ml (see the sketch after this list):
* ChiSqSelector
* PCA
* VectorIndexer
* Word2Vec
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #9838 from yanboliang/spark-11829.
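A sketch of the persistence API these estimators gain, using PCA as an example (the path is illustrative):

```scala
import org.apache.spark.ml.feature.PCA

val pca = new PCA()
  .setInputCol("features")
  .setOutputCol("pcaFeatures")
  .setK(3)
pca.write.overwrite().save("/tmp/pca-estimator")
val samePCA = PCA.load("/tmp/pca-estimator")
```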