d8430148ee

RandomRDDGenerators, but without support for randomRDD and randomVectorRDD, which take in an arbitrary DistributionGenerator. `randomRDD.py` is named to avoid a collision with the built-in Python `random` package.

Author: Doris Xin <doris.s.xin@gmail.com>

Closes #1628 from dorx/pythonRDD and squashes the following commits:

55c6de8 [Doris Xin] review comments. all python units passed.
f831d9b [Doris Xin] moved default args logic into PythonMLLibAPI
2d73917 [Doris Xin] fix for linalg.py
8663e6a [Doris Xin] reverting back to a single python file for random
f47c481 [Doris Xin] docs update
687aac0 [Doris Xin] add RandomRDDGenerators.py to run-tests
4338f40 [Doris Xin] renamed randomRDD to rand and import as random
29d205e [Doris Xin] created mllib.random package
bd2df13 [Doris Xin] typos
07ddff2 [Doris Xin] units passed.
23b2ecd [Doris Xin] WIP

74 lines | 2.9 KiB | Python
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""
|
|
PySpark is the Python API for Spark.
|
|
|
|
Public classes:
|
|
|
|
- L{SparkContext<pyspark.context.SparkContext>}
|
|
Main entry point for Spark functionality.
|
|
- L{RDD<pyspark.rdd.RDD>}
|
|
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
|
|
- L{Broadcast<pyspark.broadcast.Broadcast>}
|
|
A broadcast variable that gets reused across tasks.
|
|
- L{Accumulator<pyspark.accumulators.Accumulator>}
|
|
An "add-only" shared variable that tasks can only add values to.
|
|
- L{SparkConf<pyspark.conf.SparkConf>}
|
|
For configuring Spark.
|
|
- L{SparkFiles<pyspark.files.SparkFiles>}
|
|
Access files shipped with jobs.
|
|
- L{StorageLevel<pyspark.storagelevel.StorageLevel>}
|
|
Finer-grained cache persistence levels.
|
|
|
|
Spark SQL:
|
|
- L{SQLContext<pyspark.sql.SQLContext>}
|
|
Main entry point for SQL functionality.
|
|
- L{SchemaRDD<pyspark.sql.SchemaRDD>}
|
|
A Resilient Distributed Dataset (RDD) with Schema information for the data contained. In
|
|
addition to normal RDD operations, SchemaRDDs also support SQL.
|
|
- L{Row<pyspark.sql.Row>}
|
|
A Row of data returned by a Spark SQL query.
|
|
|
|
Hive:
|
|
- L{HiveContext<pyspark.context.HiveContext>}
|
|
Main entry point for accessing data stored in Apache Hive..
|
|
"""
# The following block allows us to import Python's random instead of
# mllib.random for scripts in mllib that depend on top-level pyspark packages,
# which transitively depend on Python's random. Since Python's import logic
# looks for modules in the current package first, we eliminate mllib.random as
# a candidate for C{import random} by removing the first search path, the
# script's location, in order to force the loader to look in Python's
# top-level modules for C{random}.
import sys
s = sys.path.pop(0)
import random
sys.path.insert(0, s)
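
# Editorial illustration (not in the original file): sys.path[0] is the
# directory of the running script, so without the pop/insert dance above a
# script inside pyspark/mllib/ would resolve C{import random} to the sibling
# mllib random module instead of the standard library. After the dance, the
# name C{random} in this module is guaranteed to be the stdlib module:
#
#   random.randint(0, 10)   # stdlib random, not pyspark.mllib's random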
from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.sql import SQLContext
from pyspark.rdd import RDD
from pyspark.sql import SchemaRDD
from pyspark.sql import Row
from pyspark.files import SparkFiles
from pyspark.storagelevel import StorageLevel


__all__ = ["SparkConf", "SparkContext", "SQLContext", "RDD", "SchemaRDD",
           "SparkFiles", "StorageLevel", "Row"]
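
# Editorial note (not in the original file): because C{__all__} is defined,
# a wildcard import binds exactly the eight names listed above, e.g.:
#
#   from pyspark import *
#   sc = SparkContext("local", "wildcard-import example")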