
---
layout: global
title: MLlib
displayTitle: Machine Learning Library (MLlib) Guide
description: MLlib machine learning library overview for Spark SPARK_VERSION_SHORT
---

MLlib is Spark's scalable machine learning library, consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, and dimensionality reduction, as well as underlying optimization primitives.

MLlib is under active development. The APIs marked `Experimental`/`DeveloperApi` may change in future releases, and the migration guide below will explain all changes between releases.

# spark.ml: high-level APIs for ML pipelines

Spark 1.2 introduced a new package called spark.ml, which aims to provide a uniform set of high-level APIs that help users create and tune practical machine learning pipelines. It is currently an alpha component, and we would like to hear back from the community about how it fits real-world use cases and how it could be improved.

Note that we will keep supporting and adding features to spark.mllib along with the development of spark.ml. Users should be comfortable using spark.mllib features and can expect more features to come. Developers should contribute new algorithms to spark.mllib and can optionally contribute to spark.ml.

See the spark.ml programming guide for more information on this package.
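As a rough illustration of what such a pipeline looks like, the sketch below chains a tokenizer, a hashing term-frequency transformer, and logistic regression into a single Pipeline. The DataFrames `training` and `test` (with `text` and `label` columns) are hypothetical, and the spark.ml programming guide remains the authoritative reference.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Sketch only: assumes `training` and `test` are DataFrames with
// "text" and "label" columns, created elsewhere in your application.
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")
val hashingTF = new HashingTF()
  .setInputCol(tokenizer.getOutputCol)
  .setOutputCol("features")
val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.01)

// A Pipeline chains the stages; fit() runs them in order and returns
// a PipelineModel that can transform new data the same way.
val pipeline = new Pipeline()
  .setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)
val predictions = model.transform(test)
```

The fitted model applies the same sequence of feature transformations to new data before scoring, which is the main convenience the pipeline API aims to provide.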

# Dependencies

MLlib uses the linear algebra package Breeze, which depends on netlib-java for optimised numerical processing. If natives are not available at runtime, you will see a warning message and a pure JVM implementation will be used instead.

To learn more about the benefits and background of system optimised natives, you may wish to watch Sam Halliday's ScalaX talk on *High Performance Linear Algebra in Scala*.

Due to licensing issues with runtime proprietary binaries, we do not include netlib-java's native proxies by default. To configure netlib-java / Breeze to use system optimised binaries, include `com.github.fommil.netlib:all:1.1.2` (or build Spark with `-Pnetlib-lgpl`) as a dependency of your project and read the netlib-java documentation for your platform's additional installation instructions.
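For example, in an sbt build the dependency might look roughly like the following sketch (the `pomOnly()` qualifier is used because the `all` artifact is published as a POM-only module; consult the netlib-java documentation if your build tool differs):

```scala
// build.sbt (sketch): add netlib-java's native system proxies so that
// Breeze can pick up optimised BLAS/LAPACK implementations at runtime.
libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
```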

To use MLlib in Python, you will need NumPy version 1.4 or newer.


# Migration Guide

For the spark.ml package, please see the spark.ml Migration Guide.

## From 1.2 to 1.3

In the `spark.mllib` package, there were several breaking changes. The first change (in `ALS`) is the only one in a component not marked as Alpha or Experimental; a short code sketch illustrating a few of the resulting adjustments follows the list.

* (Breaking change) In `ALS`, the extraneous method `solveLeastSquares` has been removed. The `DeveloperApi` method `analyzeBlocks` was also removed.
* (Breaking change) `StandardScalerModel` remains an Alpha component. In it, the `variance` method has been replaced with the `std` method. To compute the column variance values returned by the original `variance` method, simply square the standard deviation values returned by `std`.
* (Breaking change) `StreamingLinearRegressionWithSGD` remains an Experimental component. In it, there were two changes:
    * The constructor taking arguments was removed in favor of a builder pattern using the default constructor plus parameter setter methods.
    * Variable `model` is no longer public.
* (Breaking change) `DecisionTree` remains an Experimental component. In it and its associated classes, there were several changes:
    * In `DecisionTree`, the deprecated class method `train` has been removed. (The object/static `train` methods remain.)
    * In `Strategy`, the `checkpointDir` parameter has been removed. Checkpointing is still supported, but the checkpoint directory must be set before calling tree and tree ensemble training.
* `PythonMLlibAPI` (the interface between Scala/Java and Python for MLlib) was a public API but is now private, declared `private[python]`. This was never meant for external use.
* In linear regression (including Lasso and ridge regression), the squared loss is now divided by 2. So in order to produce the same result as in 1.2, the regularization parameter needs to be divided by 2 and the step size needs to be multiplied by 2.
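The fragment below sketches a few of the adjustments listed above, using hypothetical names (`scalerModel`, `numFeatures`, `regParam12`, `stepSize12`, `sc`) for values that would come from your own code:

```scala
import org.apache.spark.mllib.feature.StandardScalerModel
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD

// StandardScalerModel: variance was replaced by std; square the
// standard deviations to recover the column variances.
// (`scalerModel` is a fitted StandardScalerModel, assumed to exist.)
val variances = scalerModel.std.toArray.map(s => s * s)

// StreamingLinearRegressionWithSGD: use the default constructor plus
// setters instead of the removed constructor that took arguments.
val streamingLR = new StreamingLinearRegressionWithSGD()
  .setStepSize(0.1)
  .setNumIterations(50)
  .setInitialWeights(Vectors.zeros(numFeatures))

// DecisionTree and tree ensembles: set the checkpoint directory on the
// SparkContext before training, instead of through Strategy's removed
// checkpointDir parameter.
sc.setCheckpointDir("/tmp/spark-checkpoints")

// Linear regression with SGD: the squared loss is now divided by 2, so
// halve the regularization parameter and double the step size to
// reproduce a 1.2 result (regParam12 and stepSize12 are the old values).
val regParam13 = regParam12 / 2.0
val stepSize13 = stepSize12 * 2.0
```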

## Previous Spark Versions

Earlier migration guides are archived on the MLlib migration guides page.