Various ML guide cleanups.

* ml-guide.md: Make it easier to access the algorithm-specific guides.
* LDA user guide: EM often begins with useless topics, but running longer generally improves them dramatically. E.g., 10 iterations on a Wikipedia dataset produces useless topics, but 50 iterations produces very meaningful topics.
* mllib-feature-extraction.html#elementwiseproduct: "w" parameter should be "scalingVec"
* Clean up Binarizer user guide a little.
* Document in Pipeline that users should not put an instance into the Pipeline in more than 1 place.
* spark.ml Word2Vec user guide: clean up grammar/writing
* Chi Sq Feature Selector docs: Improve text in doc.

CC: mengxr feynmanliang

Author: Joseph K. Bradley <joseph@databricks.com>

Closes #8752 from jkbradley/mlguide-fixes-1.5.
---
layout: global
title: MLlib
displayTitle: Machine Learning Library (MLlib) Guide
description: MLlib machine learning library overview for Spark SPARK_VERSION_SHORT
---
MLlib is Spark's machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. It consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as lower-level optimization primitives and higher-level pipeline APIs.
It divides into two packages:

* `spark.mllib` contains the original API built on top of RDDs.
* `spark.ml` provides a higher-level API built on top of DataFrames for constructing ML pipelines.

Using `spark.ml` is recommended because with DataFrames the API is more versatile and flexible.
But we will keep supporting `spark.mllib` along with the development of `spark.ml`.
Users should be comfortable using `spark.mllib` features and expect more features coming.
Developers should contribute new algorithms to `spark.ml` if they fit the ML pipeline concept well,
e.g., feature extractors and transformers.
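To illustrate the pipeline concept that distinguishes `spark.ml`, here is a minimal plain-Python sketch — not the actual `spark.ml` API — in which Transformers map a dataset to a new dataset, Estimators learn a Transformer from data via `fit()`, and a Pipeline chains the stages so each one feeds the next (the class names `Tokenizer` and `CountVectorizer` only mirror spark.ml's vocabulary):

```python
class Tokenizer:
    # Transformer: a stateless dataset -> dataset mapping.
    def transform(self, docs):
        return [doc.lower().split() for doc in docs]

class CountVectorizer:
    # Estimator: fit() learns a vocabulary and returns a fitted Transformer.
    def fit(self, token_lists):
        vocab = sorted({t for ts in token_lists for t in ts})
        index = {t: i for i, t in enumerate(vocab)}

        class Model:
            def transform(self, token_lists):
                out = []
                for ts in token_lists:
                    v = [0] * len(index)
                    for t in ts:
                        if t in index:
                            v[index[t]] += 1
                    out.append(v)
                return out
        return Model()

class Pipeline:
    # Fits each Estimator on the output of the earlier stages,
    # producing a chain of fitted Transformers.
    def __init__(self, stages):
        self.stages = stages

    def fit(self, data):
        fitted = []
        for stage in self.stages:
            model = stage.fit(data) if hasattr(stage, "fit") else stage
            fitted.append(model)
            data = model.transform(data)
        return fitted

docs = ["Spark is fast", "Spark is scalable"]
fitted = Pipeline([Tokenizer(), CountVectorizer()]).fit(docs)
features = docs
for stage in fitted:
    features = stage.transform(features)
print(features)  # bag-of-words count vectors, one per document
```

The point of the sketch is the division of labor: feature extractors and transformers slot naturally into such a chain, which is why they are the kind of algorithm the text above suggests contributing to `spark.ml`.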
We list major functionality from both below, with links to detailed guides.
# spark.mllib: data types, algorithms, and utilities
- Data types
- Basic statistics
- Classification and regression
- Collaborative filtering
- Clustering
- Dimensionality reduction
- Feature extraction and transformation
- Frequent pattern mining
- Evaluation metrics
- PMML model export
- Optimization (developer)
# spark.ml: high-level APIs for ML pipelines

The spark.ml programming guide provides an overview of the Pipelines API and major concepts. It also contains sections on using algorithms within the Pipelines API, for example:
- Feature extraction, transformation, and selection
- Decision trees for classification and regression
- Ensembles
- Linear methods with elastic net regularization
- Multilayer perceptron classifier
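As a concrete example of one item in the list above, the linear methods combine their loss with an elastic-net penalty that mixes L1 and L2 regularization: with mixing parameter α ∈ [0, 1], α = 1 gives the lasso (pure L1) penalty and α = 0 gives ridge (pure L2). A hedged plain-Python sketch of that penalty term (the function name and parameter names are illustrative, not the spark.ml API):

```python
def elastic_net_penalty(weights, reg_param, alpha):
    # reg_param * (alpha * ||w||_1 + (1 - alpha) * 0.5 * ||w||_2^2)
    l1 = sum(abs(w) for w in weights)
    l2_sq = sum(w * w for w in weights)
    return reg_param * (alpha * l1 + (1 - alpha) * 0.5 * l2_sq)

w = [0.5, -1.0, 2.0]
lasso = elastic_net_penalty(w, reg_param=0.1, alpha=1.0)  # pure L1 term
ridge = elastic_net_penalty(w, reg_param=0.1, alpha=0.0)  # pure L2 term
print(lasso, ridge)
```

Intermediate values of α trade sparsity (L1) against coefficient shrinkage (L2) within a single regularizer.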
# Dependencies
MLlib uses the linear algebra package Breeze, which depends on netlib-java for optimised numerical processing. If native libraries[^1] are not available at runtime, you will see a warning message and a pure JVM implementation will be used instead.

Due to licensing issues with runtime proprietary binaries, we do not include `netlib-java`'s native proxies by default.
To configure `netlib-java` / Breeze to use system optimised binaries, include
`com.github.fommil.netlib:all:1.1.2` (or build Spark with `-Pnetlib-lgpl`) as a dependency of your
project and read the netlib-java documentation for your
platform's additional installation instructions.
To use MLlib in Python, you will need NumPy version 1.4 or newer.
# Migration guide
MLlib is under active development.
The APIs marked `Experimental`/`DeveloperApi` may change in future releases,
and the migration guide below will explain all changes between releases.
## From 1.4 to 1.5
In the `spark.mllib` package, there are no breaking API changes but several behavior changes:

* SPARK-9005: `RegressionMetrics.explainedVariance` returns the average regression sum of squares.
* SPARK-8600: `NaiveBayesModel.labels` become sorted.
* SPARK-3382: `GradientDescent` has a default convergence tolerance `1e-3`, and hence iterations might end earlier than in 1.4.
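To see why a default convergence tolerance makes runs end earlier, here is a plain-Python sketch of gradient descent with early stopping on a relative tolerance — an illustration of the behavior change, not the `GradientDescent` implementation from `spark.mllib` (the function and parameter names are hypothetical):

```python
def gradient_descent(grad, w0, step=0.1, max_iter=100, conv_tol=1e-3):
    # Minimize a 1-D objective by following its gradient; stop early once
    # the update is small relative to the current solution (or to 1.0).
    w = w0
    for i in range(max_iter):
        w_new = w - step * grad(w)
        if abs(w_new - w) < conv_tol * max(abs(w), 1.0):
            return w_new, i + 1  # converged before max_iter
        w = w_new
    return w, max_iter

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, iters = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(w, iters)  # converges near w = 3 well before 100 iterations
```

With `conv_tol=0` (roughly the pre-1.5 behavior for `GradientDescent`), the loop would always run the full `max_iter` iterations; with a nonzero tolerance it can terminate as soon as progress stalls, which is why iteration counts can drop after upgrading.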
In the `spark.ml` package, there exists one breaking API change and one behavior change:

* SPARK-9268: Java's varargs support is removed from `Params.setDefault` due to a Scala compiler bug.
* SPARK-10097: `Evaluator.isLargerBetter` is added to indicate metric ordering. Metrics like RMSE no longer flip signs as in 1.4.
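The idea behind `isLargerBetter` can be sketched in plain Python (illustrative only, not the spark.ml `Evaluator` API): rather than negating error metrics like RMSE so that "larger is always better", each metric declares its own ordering and model selection consults that flag.

```python
import math

def rmse(preds, labels):
    # Root mean squared error, reported with its natural (positive) sign.
    return math.sqrt(sum((p - l) ** 2 for p, l in zip(preds, labels)) / len(labels))

# Each metric pairs its function with an is_larger_better flag.
# (The dict name and entries are hypothetical, for illustration.)
METRICS = {
    "rmse": (rmse, False),  # smaller is better
}

def best_model(candidates, labels, metric="rmse"):
    fn, larger_is_better = METRICS[metric]
    scored = [(fn(preds, labels), name) for name, preds in candidates.items()]
    score, name = max(scored) if larger_is_better else min(scored)
    return name, score

labels = [1.0, 2.0, 3.0]
candidates = {"A": [1.1, 2.1, 2.9], "B": [0.0, 0.0, 0.0]}
name, score = best_model(candidates, labels)
print(name, score)  # the lower-RMSE model wins, with no sign flipping
```

Because the ordering lives in the flag rather than in the metric's sign, reported RMSE values stay positive, matching the 1.5 behavior change described above.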
## Previous Spark versions
Earlier migration guides are archived on this page.
[^1]: To learn more about the benefits and background of system optimised natives, you may wish to watch Sam Halliday's ScalaX talk on High Performance Linear Algebra in Scala.