---
layout: global
title: MLlib
displayTitle: Machine Learning Library (MLlib) Guide
description: MLlib machine learning library overview for Spark SPARK_VERSION_SHORT
---

MLlib is Spark's scalable machine learning library consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as underlying optimization primitives. Guides for individual algorithms are listed below.

The API is divided into two packages:

* `spark.mllib`, the main API, built on top of RDDs.
* `spark.ml`, a higher-level API built on top of DataFrames for constructing ML pipelines.

We list major functionality from both below, with links to detailed guides.

# MLlib types, algorithms and utilities

This section lists the functionality included in `spark.mllib`, the main MLlib API.
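
For orientation, here is a minimal sketch of what using the `spark.mllib` API looks like, taking k-means clustering as an example. It assumes an existing `SparkContext` named `sc`; the input path, delimiter, and parameter values are hypothetical:

```scala
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Parse a whitespace-delimited text file of numeric features into an RDD of vectors.
val data = sc.textFile("data/mllib/kmeans_data.txt")
val parsedData = data.map(s => Vectors.dense(s.split(' ').map(_.toDouble))).cache()

// Cluster the data into two classes with at most 20 iterations of k-means.
val clusters = KMeans.train(parsedData, 2, 20)

// Evaluate the clustering by computing the within-set sum of squared errors.
val wssse = clusters.computeCost(parsedData)
println(s"Within Set Sum of Squared Errors = $wssse")
```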

MLlib is under active development. APIs marked `Experimental`/`DeveloperApi` may change in future releases, and the migration guide below explains the changes between releases.

# spark.ml: high-level APIs for ML pipelines

Spark 1.2 introduced a new package called `spark.ml`, which aims to provide a uniform set of high-level APIs that help users create and tune practical machine learning pipelines.

*Graduated from Alpha!* The Pipelines API is no longer an alpha component, although many elements of it are still `Experimental` or `DeveloperApi`.

Note that we will keep supporting and adding features to `spark.mllib` alongside the development of `spark.ml`. Users can be comfortable using `spark.mllib` features and can expect more to come. Developers should contribute new algorithms to `spark.mllib` and can optionally contribute to `spark.ml`.

More detailed guides for `spark.ml` include:

* **spark.ml programming guide**: overview of the Pipelines API and major concepts
* **Feature transformers**: details on transformers supported in the Pipelines API, including a few not in the lower-level `spark.mllib` API
* **Ensembles**: details on ensemble learning methods in the Pipelines API
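
To make the pipeline concept concrete, here is a minimal sketch of a three-stage `spark.ml` pipeline that tokenizes text, hashes the tokens into feature vectors, and fits a logistic regression model. It assumes an existing DataFrame `training` with `text` and `label` columns; the column names and parameter values are illustrative:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Stage 1: split raw text into words.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
// Stage 2: hash the words into fixed-length feature vectors.
val hashingTF = new HashingTF().setInputCol(tokenizer.getOutputCol).setOutputCol("features")
// Stage 3: fit a logistic regression model on the hashed features.
val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)

// Chain the stages into a single Pipeline and fit it to the training DataFrame.
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)
```

The fitted `PipelineModel` can then be used to `transform` new DataFrames and append prediction columns.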

# Dependencies

MLlib uses the linear algebra package Breeze, which depends on netlib-java for optimised numerical processing. If natives are not available at runtime, you will see a warning message and a pure JVM implementation will be used instead.

To learn more about the benefits and background of system-optimised natives, you may wish to watch Sam Halliday's ScalaX talk on High Performance Linear Algebra in Scala.

Due to licensing issues with runtime proprietary binaries, we do not include netlib-java's native proxies by default. To configure netlib-java / Breeze to use system optimised binaries, include `com.github.fommil.netlib:all:1.1.2` (or build Spark with `-Pnetlib-lgpl`) as a dependency of your project and read the netlib-java documentation for your platform's additional installation instructions.
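
As an illustration, the dependency above might be declared in an sbt build as follows. This is a sketch, not an official recipe: the `pomOnly()` qualifier reflects that netlib-java's `all` artifact is published with POM packaging, and Maven users would declare the same coordinates with `<type>pom</type>` instead:

```scala
// build.sbt
// Pull in netlib-java's native proxies (omitted from Spark by default for licensing reasons).
libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
```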

To use MLlib in Python, you will need NumPy version 1.4 or newer.


# Migration Guide

For the `spark.ml` package, please see the spark.ml Migration Guide.

## From 1.3 to 1.4

In the `spark.mllib` package, there were several breaking changes, but all in `DeveloperApi` or `Experimental` APIs:

* Gradient-Boosted Trees
  * (Breaking change) The signature of the `Loss.gradient` method was changed. This is only an issue for users who wrote their own losses for GBTs.
  * (Breaking change) The `apply` and `copy` methods for the case class `BoostingStrategy` have been changed because of a modification to the case class fields. This could be an issue for users who use `BoostingStrategy` to set GBT parameters.
* (Breaking change) The return value of `LDA.run` has changed. It now returns an abstract class `LDAModel` instead of the concrete class `DistributedLDAModel`. An object of type `LDAModel` can still be cast to the appropriate concrete type, which depends on the optimization algorithm (see the sketch after this list).
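
As a minimal sketch of adapting to the `LDA.run` change, the snippet below casts the returned abstract `LDAModel` back to the concrete `DistributedLDAModel`. The corpus argument, `k`, and the wrapping function are illustrative; the cast is only safe when the optimizer actually produces a distributed model (the default EM optimizer does):

```scala
import org.apache.spark.mllib.clustering.{DistributedLDAModel, LDA}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// `corpus` pairs each document id with its term-count vector.
def trainDistributedLda(corpus: RDD[(Long, Vector)]): DistributedLDAModel = {
  // As of 1.4, run() returns the abstract LDAModel.
  val model = new LDA().setK(3).run(corpus)
  // Recover the concrete type; valid for the default EM optimizer.
  model.asInstanceOf[DistributedLDAModel]
}
```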

## Previous Spark Versions

Earlier migration guides are archived on this page.