#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""
PySpark is the Python API for Spark.

Public classes:

  - :class:`SparkContext`:
      Main entry point for Spark functionality.
  - :class:`RDD`:
      A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
  - :class:`Broadcast`:
      A broadcast variable that gets reused across tasks.
  - :class:`Accumulator`:
      An "add-only" shared variable that tasks can only add values to.
  - :class:`SparkConf`:
      For configuring Spark.
  - :class:`SparkFiles`:
      Access files shipped with jobs.
  - :class:`StorageLevel`:
      Finer-grained cache persistence levels.
  - :class:`TaskContext`:
      Information about the currently running task, available on the workers and experimental.

"""

from functools import wraps
import types

from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.rdd import RDD
from pyspark.files import SparkFiles
from pyspark.storagelevel import StorageLevel
from pyspark.accumulators import Accumulator, AccumulatorParam
from pyspark.broadcast import Broadcast
from pyspark.serializers import MarshalSerializer, PickleSerializer
from pyspark.status import *
from pyspark.taskcontext import TaskContext
from pyspark.profiler import Profiler, BasicProfiler
from pyspark.version import __version__
from pyspark._globals import _NoValue


def since(version):
    """
    A decorator that annotates a function to append the version of Spark the function was added.
    """
    import re
    indent_p = re.compile(r'\n( +)')

    def deco(f):
        indents = indent_p.findall(f.__doc__)
        indent = ' ' * (min(len(m) for m in indents) if indents else 0)
        f.__doc__ = f.__doc__.rstrip() + "\n\n%s.. versionadded:: %s" % (indent, version)
        return f
    return deco
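

# A minimal sketch of how `since` behaves, kept as a comment so nothing runs at
# import time. The decorated function `f` is an illustrative assumption:
#
#   >>> @since("1.3")
#   ... def f():
#   ...     """Return nothing."""
#   >>> print(f.__doc__)
#   Return nothing.
#   <BLANKLINE>
#   .. versionadded:: 1.3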


def copy_func(f, name=None, sinceversion=None, doc=None):
    """
    Returns a function with the same code, globals, defaults, and closure, and
    with the same name unless a new one is provided.
    """
    # See
    # http://stackoverflow.com/questions/6527633/how-can-i-make-a-deepcopy-of-a-function-in-python
    fn = types.FunctionType(f.__code__, f.__globals__, name or f.__name__, f.__defaults__,
                            f.__closure__)
    # in case f was given attrs (note this dict is a shallow copy):
    fn.__dict__.update(f.__dict__)
    if doc is not None:
        fn.__doc__ = doc
    if sinceversion is not None:
        fn = since(sinceversion)(fn)
    return fn
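

# A minimal sketch of `copy_func` usage, kept as a comment so nothing runs at
# import time. The helper `add` and the version string are illustrative
# assumptions:
#
#   >>> def add(x, y):
#   ...     """Add two numbers."""
#   ...     return x + y
#   >>> add_v2 = copy_func(add, name="add_v2", sinceversion="2.0")
#   >>> add_v2.__name__, add_v2(1, 2)
#   ('add_v2', 3)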


def keyword_only(func):
    """
    A decorator that forces keyword arguments in the wrapped method
    and saves actual input keyword arguments in `_input_kwargs`.

    .. note:: Should only be used to wrap a method where first arg is `self`
    """
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if len(args) > 0:
            raise TypeError("Method %s forces keyword arguments." % func.__name__)
        self._input_kwargs = kwargs
        return func(self, **kwargs)
    return wrapper
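

# A minimal sketch of `keyword_only` on a method, kept as a comment so nothing
# runs at import time. The `Model` class is an illustrative assumption, not
# part of PySpark:
#
#   >>> class Model(object):
#   ...     @keyword_only
#   ...     def __init__(self, a=1, b=2):
#   ...         self.b = self._input_kwargs.get("b", 2)
#   >>> Model(b=5).b
#   5
#   >>> Model(5)  # positional arguments are rejected
#   Traceback (most recent call last):
#       ...
#   TypeError: Method __init__ forces keyword arguments.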


# for back compatibility
from pyspark.sql import SQLContext, HiveContext, Row

__all__ = [
    "SparkConf", "SparkContext", "SparkFiles", "RDD", "StorageLevel", "Broadcast",
    "Accumulator", "AccumulatorParam", "MarshalSerializer", "PickleSerializer",
    "StatusTracker", "SparkJobInfo", "SparkStageInfo", "Profiler", "BasicProfiler", "TaskContext",
]