#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""
Unit tests for PySpark; additional tests are implemented as doctests in
individual modules.
"""

from array import array
from fileinput import input
from glob import glob
import os
import re
import shutil
import subprocess
import sys
import tempfile
import time
import zipfile
import random
from platform import python_implementation

if sys.version_info[:2] <= (2, 6):
    import unittest2 as unittest
else:
    import unittest


from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.files import SparkFiles
from pyspark.serializers import read_int, BatchedSerializer, MarshalSerializer, PickleSerializer, \
    CloudPickleSerializer
from pyspark.shuffle import Aggregator, InMemoryMerger, ExternalMerger, ExternalSorter
from pyspark.sql import SQLContext, IntegerType, Row
from pyspark import shuffle

_have_scipy = False
_have_numpy = False
try:
    import scipy.sparse
    _have_scipy = True
except ImportError:
    # No SciPy, but that's okay, we'll skip those tests
    pass
try:
    import numpy as np
    _have_numpy = True
except ImportError:
    # No NumPy, but that's okay, we'll skip those tests
    pass


SPARK_HOME = os.environ["SPARK_HOME"]

class TestMerger(unittest.TestCase):

    def setUp(self):
        self.N = 1 << 16
        self.l = [i for i in xrange(self.N)]
        self.data = zip(self.l, self.l)
        self.agg = Aggregator(lambda x: [x],
                              lambda x, y: x.append(y) or x,
                              lambda x, y: x.extend(y) or x)

    def test_in_memory(self):
        m = InMemoryMerger(self.agg)
        m.mergeValues(self.data)
        self.assertEqual(sum(sum(v) for k, v in m.iteritems()),
                         sum(xrange(self.N)))

        m = InMemoryMerger(self.agg)
        m.mergeCombiners(map(lambda (x, y): (x, [y]), self.data))
        self.assertEqual(sum(sum(v) for k, v in m.iteritems()),
                         sum(xrange(self.N)))

    def test_small_dataset(self):
        m = ExternalMerger(self.agg, 1000)
        m.mergeValues(self.data)
        self.assertEqual(m.spills, 0)
        self.assertEqual(sum(sum(v) for k, v in m.iteritems()),
                         sum(xrange(self.N)))

        m = ExternalMerger(self.agg, 1000)
        m.mergeCombiners(map(lambda (x, y): (x, [y]), self.data))
        self.assertEqual(m.spills, 0)
        self.assertEqual(sum(sum(v) for k, v in m.iteritems()),
                         sum(xrange(self.N)))

    def test_medium_dataset(self):
        m = ExternalMerger(self.agg, 10)
        m.mergeValues(self.data)
        self.assertTrue(m.spills >= 1)
        self.assertEqual(sum(sum(v) for k, v in m.iteritems()),
                         sum(xrange(self.N)))

        m = ExternalMerger(self.agg, 10)
        m.mergeCombiners(map(lambda (x, y): (x, [y]), self.data * 3))
        self.assertTrue(m.spills >= 1)
        self.assertEqual(sum(sum(v) for k, v in m.iteritems()),
                         sum(xrange(self.N)) * 3)

    def test_huge_dataset(self):
        m = ExternalMerger(self.agg, 10)
        m.mergeCombiners(map(lambda (k, v): (k, [str(v)]), self.data * 10))
        self.assertTrue(m.spills >= 1)
        self.assertEqual(sum(len(v) for k, v in m._recursive_merged_items(0)),
                         self.N * 10)
        m._cleanup()

class TestSorter(unittest.TestCase):
    def test_in_memory_sort(self):
        l = range(1024)
        random.shuffle(l)
        sorter = ExternalSorter(1024)
        self.assertEquals(sorted(l), list(sorter.sorted(l)))
        self.assertEquals(sorted(l, reverse=True), list(sorter.sorted(l, reverse=True)))
        self.assertEquals(sorted(l, key=lambda x: -x), list(sorter.sorted(l, key=lambda x: -x)))
        self.assertEquals(sorted(l, key=lambda x: -x, reverse=True),
                          list(sorter.sorted(l, key=lambda x: -x, reverse=True)))

    def test_external_sort(self):
        l = range(1024)
        random.shuffle(l)
        sorter = ExternalSorter(1)
        self.assertEquals(sorted(l), list(sorter.sorted(l)))
        self.assertGreater(shuffle.DiskBytesSpilled, 0)
        last = shuffle.DiskBytesSpilled
        self.assertEquals(sorted(l, reverse=True), list(sorter.sorted(l, reverse=True)))
        self.assertGreater(shuffle.DiskBytesSpilled, last)
        last = shuffle.DiskBytesSpilled
        self.assertEquals(sorted(l, key=lambda x: -x), list(sorter.sorted(l, key=lambda x: -x)))
        self.assertGreater(shuffle.DiskBytesSpilled, last)
        last = shuffle.DiskBytesSpilled
        self.assertEquals(sorted(l, key=lambda x: -x, reverse=True),
                          list(sorter.sorted(l, key=lambda x: -x, reverse=True)))
        self.assertGreater(shuffle.DiskBytesSpilled, last)

    def test_external_sort_in_rdd(self):
        conf = SparkConf().set("spark.python.worker.memory", "1m")
        sc = SparkContext(conf=conf)
        l = range(10240)
        random.shuffle(l)
        rdd = sc.parallelize(l, 10)
        self.assertEquals(sorted(l), rdd.sortBy(lambda x: x).collect())
        sc.stop()

class SerializationTestCase(unittest.TestCase):

    def test_namedtuple(self):
        from collections import namedtuple
        from cPickle import dumps, loads
        P = namedtuple("P", "x y")
        p1 = P(1, 3)
        p2 = loads(dumps(p1, 2))
        self.assertEquals(p1, p2)

    def test_itemgetter(self):
        from operator import itemgetter
        ser = CloudPickleSerializer()
        d = range(10)
        getter = itemgetter(1)
        getter2 = ser.loads(ser.dumps(getter))
        self.assertEqual(getter(d), getter2(d))

        getter = itemgetter(0, 3)
        getter2 = ser.loads(ser.dumps(getter))
        self.assertEqual(getter(d), getter2(d))

    def test_attrgetter(self):
        from operator import attrgetter
        ser = CloudPickleSerializer()

        class C(object):
            def __getattr__(self, item):
                return item
        d = C()
        getter = attrgetter("a")
        getter2 = ser.loads(ser.dumps(getter))
        self.assertEqual(getter(d), getter2(d))
        getter = attrgetter("a", "b")
        getter2 = ser.loads(ser.dumps(getter))
        self.assertEqual(getter(d), getter2(d))

        d.e = C()
        getter = attrgetter("e.a")
        getter2 = ser.loads(ser.dumps(getter))
        self.assertEqual(getter(d), getter2(d))
        getter = attrgetter("e.a", "e.b")
        getter2 = ser.loads(ser.dumps(getter))
        self.assertEqual(getter(d), getter2(d))

    # Regression test for SPARK-3415
    def test_pickling_file_handles(self):
        ser = CloudPickleSerializer()
        out1 = sys.stderr
        out2 = ser.loads(ser.dumps(out1))
        self.assertEquals(out1, out2)

    def test_func_globals(self):

        class Unpicklable(object):
            def __reduce__(self):
                raise Exception("not picklable")

        global exit
        exit = Unpicklable()
        ser = CloudPickleSerializer()
        self.assertRaises(Exception, lambda: ser.dumps(exit))

        def foo():
            sys.exit(0)

        self.assertTrue("exit" in foo.func_code.co_names)
        ser.dumps(foo)

class PySparkTestCase(unittest.TestCase):

    def setUp(self):
        self._old_sys_path = list(sys.path)
        class_name = self.__class__.__name__
        self.sc = SparkContext('local[4]', class_name, batchSize=2)

    def tearDown(self):
        self.sc.stop()
        sys.path = self._old_sys_path

class TestCheckpoint(PySparkTestCase):

    def setUp(self):
        PySparkTestCase.setUp(self)
        self.checkpointDir = tempfile.NamedTemporaryFile(delete=False)
        os.unlink(self.checkpointDir.name)
        self.sc.setCheckpointDir(self.checkpointDir.name)

    def tearDown(self):
        PySparkTestCase.tearDown(self)
        shutil.rmtree(self.checkpointDir.name)

    def test_basic_checkpointing(self):
        parCollection = self.sc.parallelize([1, 2, 3, 4])
        flatMappedRDD = parCollection.flatMap(lambda x: range(1, x + 1))

        self.assertFalse(flatMappedRDD.isCheckpointed())
        self.assertTrue(flatMappedRDD.getCheckpointFile() is None)

        flatMappedRDD.checkpoint()
        result = flatMappedRDD.collect()
        time.sleep(1)  # 1 second
        self.assertTrue(flatMappedRDD.isCheckpointed())
        self.assertEqual(flatMappedRDD.collect(), result)
        self.assertEqual("file:" + self.checkpointDir.name,
                         os.path.dirname(os.path.dirname(flatMappedRDD.getCheckpointFile())))

    def test_checkpoint_and_restore(self):
        parCollection = self.sc.parallelize([1, 2, 3, 4])
        flatMappedRDD = parCollection.flatMap(lambda x: [x])

        self.assertFalse(flatMappedRDD.isCheckpointed())
        self.assertTrue(flatMappedRDD.getCheckpointFile() is None)

        flatMappedRDD.checkpoint()
        flatMappedRDD.count()  # forces a checkpoint to be computed
        time.sleep(1)  # 1 second

        self.assertTrue(flatMappedRDD.getCheckpointFile() is not None)
        recovered = self.sc._checkpointFile(flatMappedRDD.getCheckpointFile(),
                                            flatMappedRDD._jrdd_deserializer)
        self.assertEquals([1, 2, 3, 4], recovered.collect())

class TestAddFile(PySparkTestCase):

    def test_add_py_file(self):
        # To ensure that we're actually testing addPyFile's effects, check that
        # this job fails due to `userlibrary` not being on the Python path:
        # disable logging in log4j temporarily
        log4j = self.sc._jvm.org.apache.log4j
        old_level = log4j.LogManager.getRootLogger().getLevel()
        log4j.LogManager.getRootLogger().setLevel(log4j.Level.FATAL)

        def func(x):
            from userlibrary import UserClass
            return UserClass().hello()
        self.assertRaises(Exception,
                          self.sc.parallelize(range(2)).map(func).first)
        log4j.LogManager.getRootLogger().setLevel(old_level)

        # Add the file, so the job should now succeed:
        path = os.path.join(SPARK_HOME, "python/test_support/userlibrary.py")
        self.sc.addPyFile(path)
        res = self.sc.parallelize(range(2)).map(func).first()
        self.assertEqual("Hello World!", res)

    def test_add_file_locally(self):
        path = os.path.join(SPARK_HOME, "python/test_support/hello.txt")
        self.sc.addFile(path)
        download_path = SparkFiles.get("hello.txt")
        self.assertNotEqual(path, download_path)
        with open(download_path) as test_file:
            self.assertEquals("Hello World!\n", test_file.readline())

    def test_add_py_file_locally(self):
        # To ensure that we're actually testing addPyFile's effects, check that
        # this fails due to `userlibrary` not being on the Python path:
        def func():
            from userlibrary import UserClass
        self.assertRaises(ImportError, func)
        path = os.path.join(SPARK_HOME, "python/test_support/userlibrary.py")
        self.sc.addFile(path)
        from userlibrary import UserClass
        self.assertEqual("Hello World!", UserClass().hello())

    def test_add_egg_file_locally(self):
        # To ensure that we're actually testing addPyFile's effects, check that
        # this fails due to `userlib` not being on the Python path:
        def func():
            from userlib import UserClass
        self.assertRaises(ImportError, func)
        path = os.path.join(SPARK_HOME, "python/test_support/userlib-0.1-py2.7.egg")
        self.sc.addPyFile(path)
        from userlib import UserClass
        self.assertEqual("Hello World from inside a package!", UserClass().hello())

    def test_overwrite_system_module(self):
        self.sc.addPyFile(os.path.join(SPARK_HOME, "python/test_support/SimpleHTTPServer.py"))

        import SimpleHTTPServer
        self.assertEqual("My Server", SimpleHTTPServer.__name__)

        def func(x):
            import SimpleHTTPServer
            return SimpleHTTPServer.__name__

        self.assertEqual(["My Server"], self.sc.parallelize(range(1)).map(func).collect())

2013-11-29 02:44:56 -05:00
|
|
|
class TestRDDFunctions(PySparkTestCase):
|
|
|
|
|
2014-09-06 19:12:29 -04:00
|
|
|
def test_id(self):
|
|
|
|
rdd = self.sc.parallelize(range(10))
|
|
|
|
id = rdd.id()
|
|
|
|
self.assertEqual(id, rdd.id())
|
|
|
|
rdd2 = rdd.map(str).filter(bool)
|
|
|
|
id2 = rdd2.id()
|
|
|
|
self.assertEqual(id + 1, id2)
|
|
|
|
self.assertEqual(id2, rdd2.id())
|
|
|
|
|
2014-07-28 01:54:43 -04:00
|
|
|
def test_failed_sparkcontext_creation(self):
|
|
|
|
# Regression test for SPARK-1550
|
|
|
|
self.sc.stop()
|
|
|
|
self.assertRaises(Exception, lambda: SparkContext("an-invalid-master-name"))
|
|
|
|
self.sc = SparkContext("local")
|
|
|
|
|
2013-11-29 02:44:56 -05:00
|
|
|
def test_save_as_textfile_with_unicode(self):
|
|
|
|
# Regression test for SPARK-970
|
|
|
|
x = u"\u00A1Hola, mundo!"
|
|
|
|
data = self.sc.parallelize([x])
|
[SPARK-1549] Add Python support to spark-submit
This PR updates spark-submit to allow submitting Python scripts (currently only with deploy-mode=client, but that's all that was supported before) and updates the PySpark code to properly find various paths, etc. One significant change is that we assume we can always find the Python files either from the Spark assembly JAR (which will happen with the Maven assembly build in make-distribution.sh) or from SPARK_HOME (which will exist in local mode even if you use sbt assembly, and should be enough for testing). This means we no longer need a weird hack to modify the environment for YARN.
This patch also updates the Python worker manager to run python with -u, which means unbuffered output (send it to our logs right away instead of waiting a while after stuff was written); this should simplify debugging.
In addition, it fixes https://issues.apache.org/jira/browse/SPARK-1709, setting the main class from a JAR's Main-Class attribute if not specified by the user, and fixes a few help strings and style issues in spark-submit.
In the future we may want to make the `pyspark` shell use spark-submit as well, but it seems unnecessary for 1.0.
Author: Matei Zaharia <matei@databricks.com>
Closes #664 from mateiz/py-submit and squashes the following commits:
15e9669 [Matei Zaharia] Fix some uses of path.separator property
051278c [Matei Zaharia] Small style fixes
0afe886 [Matei Zaharia] Add license headers
4650412 [Matei Zaharia] Add pyFiles to PYTHONPATH in executors, remove old YARN stuff, add tests
15f8e1e [Matei Zaharia] Set PYTHONPATH in PythonWorkerFactory in case it wasn't set from outside
47c0655 [Matei Zaharia] More work to make spark-submit work with Python:
d4375bd [Matei Zaharia] Clean up description of spark-submit args a bit and add Python ones
2014-05-06 18:12:35 -04:00
|
|
|
tempFile = tempfile.NamedTemporaryFile(delete=True)
|
2013-11-29 02:44:56 -05:00
|
|
|
tempFile.close()
|
|
|
|
data.saveAsTextFile(tempFile.name)
|
|
|
|
raw_contents = ''.join(input(glob(tempFile.name + "/part-0000*")))
|
|
|
|
self.assertEqual(x, unicode(raw_contents.strip(), "utf-8"))
|
|
|
|
|
2014-08-18 16:58:35 -04:00
|
|
|
def test_save_as_textfile_with_utf8(self):
|
|
|
|
x = u"\u00A1Hola, mundo!"
|
|
|
|
data = self.sc.parallelize([x.encode("utf-8")])
|
|
|
|
tempFile = tempfile.NamedTemporaryFile(delete=True)
|
|
|
|
tempFile.close()
|
|
|
|
data.saveAsTextFile(tempFile.name)
|
|
|
|
raw_contents = ''.join(input(glob(tempFile.name + "/part-0000*")))
|
|
|
|
self.assertEqual(x, unicode(raw_contents.strip(), "utf-8"))
|
|
|
|
|
2014-01-23 16:05:59 -05:00
|
|
|
def test_transforming_cartesian_result(self):
|
|
|
|
# Regression test for SPARK-1034
|
|
|
|
rdd1 = self.sc.parallelize([1, 2])
|
|
|
|
rdd2 = self.sc.parallelize([3, 4])
|
|
|
|
cart = rdd1.cartesian(rdd2)
|
|
|
|
result = cart.map(lambda (x, y): x + y).collect()
|
|
|
|
|
2014-07-26 20:37:05 -04:00
|
|
|
def test_transforming_pickle_file(self):
|
|
|
|
# Regression test for SPARK-2601
|
|
|
|
data = self.sc.parallelize(["Hello", "World!"])
|
|
|
|
tempFile = tempfile.NamedTemporaryFile(delete=True)
|
|
|
|
tempFile.close()
|
|
|
|
data.saveAsPickleFile(tempFile.name)
|
|
|
|
pickled_file = self.sc.pickleFile(tempFile.name)
|
|
|
|
pickled_file.map(lambda x: x).collect()
|
|
|
|
|
2014-01-23 18:09:19 -05:00
|
|
|
def test_cartesian_on_textfile(self):
|
|
|
|
# Regression test for
|
|
|
|
path = os.path.join(SPARK_HOME, "python/test_support/hello.txt")
|
|
|
|
a = self.sc.textFile(path)
|
|
|
|
result = a.cartesian(a).collect()
|
|
|
|
(x, y) = result[0]
|
|
|
|
self.assertEqual("Hello World!", x.strip())
|
|
|
|
self.assertEqual("Hello World!", y.strip())
|
|
|
|
|
2014-01-23 21:10:16 -05:00
|
|
|
def test_deleting_input_files(self):
|
|
|
|
# Regression test for SPARK-1025
|
2014-05-06 18:12:35 -04:00
|
|
|
tempFile = tempfile.NamedTemporaryFile(delete=False)
|
2014-01-23 21:10:16 -05:00
|
|
|
tempFile.write("Hello World!")
|
|
|
|
tempFile.close()
|
|
|
|
data = self.sc.textFile(tempFile.name)
|
|
|
|
filtered_data = data.filter(lambda x: True)
|
|
|
|
self.assertEqual(1, filtered_data.count())
|
|
|
|
os.unlink(tempFile.name)
|
|
|
|
self.assertRaises(Exception, lambda: filtered_data.count())
|
|
|
|
|
2014-06-12 11:14:25 -04:00
|
|
|
def testAggregateByKey(self):
|
|
|
|
data = self.sc.parallelize([(1, 1), (1, 1), (3, 2), (5, 1), (5, 3)], 2)
|
2014-07-22 01:30:53 -04:00
|
|
|
|
2014-06-12 11:14:25 -04:00
|
|
|
def seqOp(x, y):
|
|
|
|
x.add(y)
|
|
|
|
return x
|
|
|
|
|
|
|
|
def combOp(x, y):
|
|
|
|
x |= y
|
|
|
|
return x
|
2014-07-22 01:30:53 -04:00
|
|
|
|
2014-06-12 11:14:25 -04:00
|
|
|
sets = dict(data.aggregateByKey(set(), seqOp, combOp).collect())
|
|
|
|
self.assertEqual(3, len(sets))
|
|
|
|
self.assertEqual(set([1]), sets[1])
|
|
|
|
self.assertEqual(set([2]), sets[3])
|
|
|
|
self.assertEqual(set([1, 3]), sets[5])
|
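The aggregateByKey semantics checked above can be modeled in pure Python; this is an illustrative sketch (the `aggregate_by_key` helper and the explicit two-partition split are hypothetical, not PySpark internals): seqOp folds values into a per-partition accumulator seeded from a fresh copy of the zero value, and combOp merges accumulators for the same key across partitions.

```python
import copy

def aggregate_by_key(partitions, zero, seq_op, comb_op):
    """Pure-Python model of aggregateByKey over pre-partitioned pairs."""
    per_partition = []
    for part in partitions:
        accs = {}
        for k, v in part:
            # Each key starts from a fresh copy of the zero value.
            if k not in accs:
                accs[k] = copy.deepcopy(zero)
            accs[k] = seq_op(accs[k], v)
        per_partition.append(accs)
    # Merge per-partition accumulators for the same key.
    merged = {}
    for accs in per_partition:
        for k, acc in accs.items():
            merged[k] = comb_op(merged[k], acc) if k in merged else acc
    return merged

def seqOp(x, y):
    x.add(y)
    return x

def combOp(x, y):
    x |= y
    return x

# Same data as the test, pre-split into two hypothetical partitions.
sets = aggregate_by_key([[(1, 1), (1, 1), (3, 2)], [(5, 1), (5, 3)]],
                        set(), seqOp, combOp)
# → {1: {1}, 3: {2}, 5: {1, 3}}
```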
2013-11-29 02:44:56 -05:00
|
|
|
|
2014-07-29 04:02:18 -04:00
|
|
|
def test_itemgetter(self):
|
|
|
|
rdd = self.sc.parallelize([range(10)])
|
|
|
|
from operator import itemgetter
|
|
|
|
self.assertEqual([1], rdd.map(itemgetter(1)).collect())
|
|
|
|
self.assertEqual([(2, 3)], rdd.map(itemgetter(2, 3)).collect())
|
|
|
|
|
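The point of the itemgetter test above is that `operator.itemgetter` objects survive serialization to the workers. A minimal local sketch of that property, using plain `pickle` as a stand-in for PySpark's serializer (note: itemgetter pickling requires Python 3.5+; in the Python 2 era this needed cloudpickle):

```python
import pickle
from operator import itemgetter

# Serialize and restore an itemgetter, as would happen when shipping
# the mapped function to a worker.
getter = itemgetter(2, 3)
restored = pickle.loads(pickle.dumps(getter))

row = list(range(10))
result = restored(row)  # → (2, 3)
```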
2014-08-04 15:13:41 -04:00
|
|
|
def test_namedtuple_in_rdd(self):
|
|
|
|
from collections import namedtuple
|
|
|
|
Person = namedtuple("Person", "id firstName lastName")
|
|
|
|
jon = Person(1, "Jon", "Doe")
|
|
|
|
jane = Person(2, "Jane", "Doe")
|
|
|
|
theDoes = self.sc.parallelize([jon, jane])
|
|
|
|
self.assertEquals([jon, jane], theDoes.collect())
|
|
|
|
|
2014-08-16 19:59:34 -04:00
|
|
|
def test_large_broadcast(self):
|
|
|
|
N = 100000
|
|
|
|
data = [[float(i) for i in range(300)] for i in range(N)]
|
|
|
|
bdata = self.sc.broadcast(data) # 270MB
|
|
|
|
m = self.sc.parallelize(range(1), 1).map(lambda x: len(bdata.value)).sum()
|
|
|
|
self.assertEquals(N, m)
|
|
|
|
|
2014-09-18 21:11:48 -04:00
|
|
|
def test_large_closure(self):
|
|
|
|
N = 1000000
|
|
|
|
data = [float(i) for i in xrange(N)]
|
|
|
|
m = self.sc.parallelize(range(1), 1).map(lambda x: len(data)).sum()
|
|
|
|
self.assertEquals(N, m)
|
|
|
|
|
2014-08-19 17:46:32 -04:00
|
|
|
def test_zip_with_different_serializers(self):
|
|
|
|
a = self.sc.parallelize(range(5))
|
|
|
|
b = self.sc.parallelize(range(100, 105))
|
|
|
|
self.assertEqual(a.zip(b).collect(), [(0, 100), (1, 101), (2, 102), (3, 103), (4, 104)])
|
|
|
|
a = a._reserialize(BatchedSerializer(PickleSerializer(), 2))
|
|
|
|
b = b._reserialize(MarshalSerializer())
|
|
|
|
self.assertEqual(a.zip(b).collect(), [(0, 100), (1, 101), (2, 102), (3, 103), (4, 104)])
|
|
|
|
|
|
|
|
def test_zip_with_different_number_of_items(self):
|
|
|
|
a = self.sc.parallelize(range(5), 2)
|
|
|
|
# different number of partitions
|
|
|
|
b = self.sc.parallelize(range(100, 106), 3)
|
|
|
|
self.assertRaises(ValueError, lambda: a.zip(b))
|
|
|
|
# different number of batched items in JVM
|
|
|
|
b = self.sc.parallelize(range(100, 104), 2)
|
|
|
|
self.assertRaises(Exception, lambda: a.zip(b).count())
|
|
|
|
# different number of items in one pair
|
|
|
|
b = self.sc.parallelize(range(100, 106), 2)
|
|
|
|
self.assertRaises(Exception, lambda: a.zip(b).count())
|
|
|
|
# same total number of items, but different distributions
|
|
|
|
a = self.sc.parallelize([2, 3], 2).flatMap(range)
|
|
|
|
b = self.sc.parallelize([3, 2], 2).flatMap(range)
|
|
|
|
self.assertEquals(a.count(), b.count())
|
|
|
|
self.assertRaises(Exception, lambda: a.zip(b).count())
|
|
|
|
|
2014-09-02 18:47:47 -04:00
|
|
|
def test_count_approx_distinct(self):
|
|
|
|
rdd = self.sc.parallelize(range(1000))
|
|
|
|
self.assertTrue(950 < rdd.countApproxDistinct(0.04) < 1050)
|
|
|
|
self.assertTrue(950 < rdd.map(float).countApproxDistinct(0.04) < 1050)
|
|
|
|
self.assertTrue(950 < rdd.map(str).countApproxDistinct(0.04) < 1050)
|
|
|
|
self.assertTrue(950 < rdd.map(lambda x: (x, -x)).countApproxDistinct(0.04) < 1050)
|
|
|
|
|
|
|
|
rdd = self.sc.parallelize([i % 20 for i in range(1000)], 7)
|
|
|
|
self.assertTrue(18 < rdd.countApproxDistinct() < 22)
|
|
|
|
self.assertTrue(18 < rdd.map(float).countApproxDistinct() < 22)
|
|
|
|
self.assertTrue(18 < rdd.map(str).countApproxDistinct() < 22)
|
|
|
|
self.assertTrue(18 < rdd.map(lambda x: (x, -x)).countApproxDistinct() < 22)
|
|
|
|
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.countApproxDistinct(0.00000001))
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.countApproxDistinct(0.5))
|
|
|
|
|
[SPARK-2871] [PySpark] add histogram() API
RDD.histogram(buckets)
Compute a histogram using the provided buckets. The buckets
are all open to the right except for the last, which is closed.
e.g. [1,10,20,50] means the buckets are [1,10) [10,20) [20,50],
i.e. 1<=x<10, 10<=x<20, 20<=x<=50. On an input of 1
and 50 we would have a histogram of 1,0,1.
If your buckets are evenly spaced (e.g. [0, 10, 20, 30]),
lookup can be switched from an O(log n) insertion to O(1) per
element (where n = # buckets).
Buckets must be sorted, must not contain any duplicates, and must
have at least two elements.
If `buckets` is a number, it generates buckets that are
evenly spaced between the minimum and maximum of the RDD. For
example, if the min value is 0 and the max is 100, given buckets
as 2, the resulting buckets will be [0,50) [50,100]. `buckets` must
be at least 1. If the RDD contains infinity or NaN, an exception is thrown.
If the elements in the RDD do not vary (max == min), a single
bucket is always returned.
It returns a tuple of buckets and histogram.
>>> rdd = sc.parallelize(range(51))
>>> rdd.histogram(2)
([0, 25, 50], [25, 26])
>>> rdd.histogram([0, 5, 25, 50])
([0, 5, 25, 50], [5, 20, 26])
>>> rdd.histogram([0, 15, 30, 45, 60], True)
([0, 15, 30, 45, 60], [15, 15, 15, 6])
>>> rdd = sc.parallelize(["ab", "ac", "b", "bd", "ef"])
>>> rdd.histogram(("a", "b", "c"))
(('a', 'b', 'c'), [2, 2])
closes #122, it's duplicated.
Author: Davies Liu <davies.liu@gmail.com>
Closes #2091 from davies/histgram and squashes the following commits:
a322f8a [Davies Liu] fix deprecation of e.message
84e85fa [Davies Liu] remove evenBuckets, add more tests (including str)
d9a0722 [Davies Liu] address comments
0e18a2d [Davies Liu] add histgram() API
2014-08-26 16:04:30 -04:00
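The bucketing rule described above (every bucket half-open on the right except the last, which is closed) can be sketched in pure Python. This is an illustrative model, not PySpark's implementation; it shows the O(log n)-per-element sorted-bucket path via `bisect`, and omits the None/NaN filtering the real API performs:

```python
from bisect import bisect_right

def histogram(values, buckets):
    # buckets: sorted, duplicate-free, at least two elements.
    # Every bucket is [b[i], b[i+1]) except the last, which is
    # closed on the right; out-of-range values are dropped.
    counts = [0] * (len(buckets) - 1)
    for v in values:
        if v == buckets[-1]:          # closed right edge of the last bucket
            counts[-1] += 1
            continue
        i = bisect_right(buckets, v) - 1
        if 0 <= i < len(counts):
            counts[i] += 1
    return counts

counts = histogram(range(51), [0, 5, 25, 50])  # → [5, 20, 26]
```

This reproduces the doctest from the commit message: `rdd.histogram([0, 5, 25, 50])` over `range(51)` yields counts `[5, 20, 26]`.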
|
|
|
def test_histogram(self):
|
|
|
|
# empty
|
|
|
|
rdd = self.sc.parallelize([])
|
|
|
|
self.assertEquals([0], rdd.histogram([0, 10])[1])
|
|
|
|
self.assertEquals([0, 0], rdd.histogram([0, 4, 10])[1])
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.histogram(1))
|
|
|
|
|
|
|
|
# out of range
|
|
|
|
rdd = self.sc.parallelize([10.01, -0.01])
|
|
|
|
self.assertEquals([0], rdd.histogram([0, 10])[1])
|
|
|
|
self.assertEquals([0, 0], rdd.histogram((0, 4, 10))[1])
|
|
|
|
|
|
|
|
# in range with one bucket
|
|
|
|
rdd = self.sc.parallelize(range(1, 5))
|
|
|
|
self.assertEquals([4], rdd.histogram([0, 10])[1])
|
|
|
|
self.assertEquals([3, 1], rdd.histogram([0, 4, 10])[1])
|
|
|
|
|
|
|
|
# in range with one bucket exact match
|
|
|
|
self.assertEquals([4], rdd.histogram([1, 4])[1])
|
|
|
|
|
|
|
|
# out of range with two buckets
|
|
|
|
rdd = self.sc.parallelize([10.01, -0.01])
|
|
|
|
self.assertEquals([0, 0], rdd.histogram([0, 5, 10])[1])
|
|
|
|
|
|
|
|
# out of range with two uneven buckets
|
|
|
|
rdd = self.sc.parallelize([10.01, -0.01])
|
|
|
|
self.assertEquals([0, 0], rdd.histogram([0, 4, 10])[1])
|
|
|
|
|
|
|
|
# in range with two buckets
|
|
|
|
rdd = self.sc.parallelize([1, 2, 3, 5, 6])
|
|
|
|
self.assertEquals([3, 2], rdd.histogram([0, 5, 10])[1])
|
|
|
|
|
|
|
|
# in range with two bucket and None
|
|
|
|
rdd = self.sc.parallelize([1, 2, 3, 5, 6, None, float('nan')])
|
|
|
|
self.assertEquals([3, 2], rdd.histogram([0, 5, 10])[1])
|
|
|
|
|
|
|
|
# in range with two uneven buckets
|
|
|
|
rdd = self.sc.parallelize([1, 2, 3, 5, 6])
|
|
|
|
self.assertEquals([3, 2], rdd.histogram([0, 5, 11])[1])
|
|
|
|
|
|
|
|
# mixed range with two uneven buckets
|
|
|
|
rdd = self.sc.parallelize([-0.01, 0.0, 1, 2, 3, 5, 6, 11.0, 11.01])
|
|
|
|
self.assertEquals([4, 3], rdd.histogram([0, 5, 11])[1])
|
|
|
|
|
|
|
|
# mixed range with four uneven buckets
|
|
|
|
rdd = self.sc.parallelize([-0.01, 0.0, 1, 2, 3, 5, 6, 11.01, 12.0, 199.0, 200.0, 200.1])
|
|
|
|
self.assertEquals([4, 2, 1, 3], rdd.histogram([0.0, 5.0, 11.0, 12.0, 200.0])[1])
|
|
|
|
|
|
|
|
# mixed range with uneven buckets and NaN
|
|
|
|
rdd = self.sc.parallelize([-0.01, 0.0, 1, 2, 3, 5, 6, 11.01, 12.0,
|
|
|
|
199.0, 200.0, 200.1, None, float('nan')])
|
|
|
|
self.assertEquals([4, 2, 1, 3], rdd.histogram([0.0, 5.0, 11.0, 12.0, 200.0])[1])
|
|
|
|
|
|
|
|
# out of range with infinite buckets
|
|
|
|
rdd = self.sc.parallelize([10.01, -0.01, float('nan'), float("inf")])
|
|
|
|
self.assertEquals([1, 2], rdd.histogram([float('-inf'), 0, float('inf')])[1])
|
|
|
|
|
|
|
|
# invalid buckets
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.histogram([]))
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.histogram([1]))
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.histogram(0))
|
|
|
|
self.assertRaises(TypeError, lambda: rdd.histogram({}))
|
|
|
|
|
|
|
|
# without buckets
|
|
|
|
rdd = self.sc.parallelize(range(1, 5))
|
|
|
|
self.assertEquals(([1, 4], [4]), rdd.histogram(1))
|
|
|
|
|
|
|
|
# without buckets single element
|
|
|
|
rdd = self.sc.parallelize([1])
|
|
|
|
self.assertEquals(([1, 1], [1]), rdd.histogram(1))
|
|
|
|
|
|
|
|
# without bucket no range
|
|
|
|
rdd = self.sc.parallelize([1] * 4)
|
|
|
|
self.assertEquals(([1, 1], [4]), rdd.histogram(1))
|
|
|
|
|
|
|
|
# without buckets basic two
|
|
|
|
rdd = self.sc.parallelize(range(1, 5))
|
|
|
|
self.assertEquals(([1, 2.5, 4], [2, 2]), rdd.histogram(2))
|
|
|
|
|
|
|
|
# without buckets with more requested than elements
|
|
|
|
rdd = self.sc.parallelize([1, 2])
|
|
|
|
buckets = [1 + 0.2 * i for i in range(6)]
|
|
|
|
hist = [1, 0, 0, 0, 1]
|
|
|
|
self.assertEquals((buckets, hist), rdd.histogram(5))
|
|
|
|
|
|
|
|
# invalid RDDs
|
|
|
|
rdd = self.sc.parallelize([1, float('inf')])
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.histogram(2))
|
|
|
|
rdd = self.sc.parallelize([float('nan')])
|
|
|
|
self.assertRaises(ValueError, lambda: rdd.histogram(2))
|
|
|
|
|
|
|
|
# string
|
|
|
|
rdd = self.sc.parallelize(["ab", "ac", "b", "bd", "ef"], 2)
|
|
|
|
self.assertEquals([2, 2], rdd.histogram(["a", "b", "c"])[1])
|
|
|
|
self.assertEquals((["ab", "ef"], [5]), rdd.histogram(1))
|
|
|
|
self.assertRaises(TypeError, lambda: rdd.histogram(2))
|
|
|
|
|
|
|
|
# mixed RDD
|
|
|
|
rdd = self.sc.parallelize([1, 4, "ab", "ac", "b"], 2)
|
|
|
|
self.assertEquals([1, 1], rdd.histogram([0, 4, 10])[1])
|
|
|
|
self.assertEquals([2, 1], rdd.histogram(["a", "b", "c"])[1])
|
|
|
|
self.assertEquals(([1, "b"], [5]), rdd.histogram(1))
|
|
|
|
self.assertRaises(TypeError, lambda: rdd.histogram(2))
|
|
|
|
|
2014-09-08 14:20:00 -04:00
|
|
|
def test_repartitionAndSortWithinPartitions(self):
|
|
|
|
rdd = self.sc.parallelize([(0, 5), (3, 8), (2, 6), (0, 8), (3, 8), (1, 3)], 2)
|
|
|
|
|
|
|
|
repartitioned = rdd.repartitionAndSortWithinPartitions(2, lambda key: key % 2)
|
|
|
|
partitions = repartitioned.glom().collect()
|
|
|
|
self.assertEquals(partitions[0], [(0, 5), (0, 8), (2, 6)])
|
|
|
|
self.assertEquals(partitions[1], [(1, 3), (3, 8), (3, 8)])
|
|
|
|
|
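repartitionAndSortWithinPartitions, as exercised above, can be modeled locally; an illustrative sketch (the `repartition_and_sort` helper is hypothetical, standing in for the shuffle): route each pair to a partition by its key, then sort each partition by key.

```python
def repartition_and_sort(pairs, num_partitions, partition_func):
    # Route each pair to a partition by its key...
    partitions = [[] for _ in range(num_partitions)]
    for k, v in pairs:
        partitions[partition_func(k) % num_partitions].append((k, v))
    # ...then sort each partition by key (sorted() is stable).
    return [sorted(p, key=lambda kv: kv[0]) for p in partitions]

parts = repartition_and_sort(
    [(0, 5), (3, 8), (2, 6), (0, 8), (3, 8), (1, 3)],
    2, lambda key: key % 2)
# parts[0] → [(0, 5), (0, 8), (2, 6)]
# parts[1] → [(1, 3), (3, 8), (3, 8)]
```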
2014-09-16 14:39:57 -04:00
|
|
|
def test_distinct(self):
|
|
|
|
rdd = self.sc.parallelize((1, 2, 3)*10, 10)
|
|
|
|
self.assertEquals(rdd.getNumPartitions(), 10)
|
|
|
|
self.assertEquals(rdd.distinct().count(), 3)
|
|
|
|
result = rdd.distinct(5)
|
|
|
|
self.assertEquals(result.getNumPartitions(), 5)
|
|
|
|
self.assertEquals(result.count(), 3)
|
|
|
|
|
2014-07-22 01:30:53 -04:00
|
|
|
|
[SPARK-3478] [PySpark] Profile the Python tasks
This patch adds profiling support for PySpark; it shows the profiling results
before the driver exits. Here is one example:
```
============================================================
Profile of RDD<id=3>
============================================================
5146507 function calls (5146487 primitive calls) in 71.094 seconds
Ordered by: internal time, cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
5144576 68.331 0.000 68.331 0.000 statcounter.py:44(merge)
20 2.735 0.137 71.071 3.554 statcounter.py:33(__init__)
20 0.017 0.001 0.017 0.001 {cPickle.dumps}
1024 0.003 0.000 0.003 0.000 t.py:16(<lambda>)
20 0.001 0.000 0.001 0.000 {reduce}
21 0.001 0.000 0.001 0.000 {cPickle.loads}
20 0.001 0.000 0.001 0.000 copy_reg.py:95(_slotnames)
41 0.001 0.000 0.001 0.000 serializers.py:461(read_int)
40 0.001 0.000 0.002 0.000 serializers.py:179(_batched)
62 0.000 0.000 0.000 0.000 {method 'read' of 'file' objects}
20 0.000 0.000 71.072 3.554 rdd.py:863(<lambda>)
20 0.000 0.000 0.001 0.000 serializers.py:198(load_stream)
40/20 0.000 0.000 71.072 3.554 rdd.py:2093(pipeline_func)
41 0.000 0.000 0.002 0.000 serializers.py:130(load_stream)
40 0.000 0.000 71.072 1.777 rdd.py:304(func)
20 0.000 0.000 71.094 3.555 worker.py:82(process)
```
Also, users can show profile results manually via `sc.show_profiles()` or dump them to disk
via `sc.dump_profiles(path)`, such as
```python
>>> sc._conf.set("spark.python.profile", "true")
>>> rdd = sc.parallelize(range(100)).map(str)
>>> rdd.count()
100
>>> sc.show_profiles()
============================================================
Profile of RDD<id=1>
============================================================
284 function calls (276 primitive calls) in 0.001 seconds
Ordered by: internal time, cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
4 0.000 0.000 0.000 0.000 serializers.py:198(load_stream)
4 0.000 0.000 0.000 0.000 {reduce}
12/4 0.000 0.000 0.001 0.000 rdd.py:2092(pipeline_func)
4 0.000 0.000 0.000 0.000 {cPickle.loads}
4 0.000 0.000 0.000 0.000 {cPickle.dumps}
104 0.000 0.000 0.000 0.000 rdd.py:852(<genexpr>)
8 0.000 0.000 0.000 0.000 serializers.py:461(read_int)
12 0.000 0.000 0.000 0.000 rdd.py:303(func)
```
Profiling is disabled by default and can be enabled with "spark.python.profile=true".
Also, users can dump the results to disk automatically for future analysis with "spark.python.profile.dump=path_to_dump".
This is a bugfix of #2351. cc JoshRosen
Author: Davies Liu <davies.liu@gmail.com>
Closes #2556 from davies/profiler and squashes the following commits:
e68df5a [Davies Liu] Merge branch 'master' of github.com:apache/spark into profiler
858e74c [Davies Liu] compatitable with python 2.6
7ef2aa0 [Davies Liu] bugfix, add tests for show_profiles and dump_profiles()
2b0daf2 [Davies Liu] fix docs
7a56c24 [Davies Liu] bugfix
cba9463 [Davies Liu] move show_profiles and dump_profiles to SparkContext
fb9565b [Davies Liu] Merge branch 'master' of github.com:apache/spark into profiler
116d52a [Davies Liu] Merge branch 'master' of github.com:apache/spark into profiler
09d02c3 [Davies Liu] Merge branch 'master' into profiler
c23865c [Davies Liu] Merge branch 'master' into profiler
15d6f18 [Davies Liu] add docs for two configs
dadee1a [Davies Liu] add docs string and clear profiles after show or dump
4f8309d [Davies Liu] address comment, add tests
0a5b6eb [Davies Liu] fix Python UDF
4b20494 [Davies Liu] add profile for python
2014-09-30 21:24:57 -04:00
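The per-RDD profiles shown above come from the standard-library profiler; a minimal local sketch of the same mechanism (cProfile + pstats, outside Spark, Python 3 `io.StringIO` here; `heavy_foo` mirrors the test below):

```python
import cProfile
import io
import pstats

def heavy_foo(x):
    total = 0
    for i in range(1 << 10):
        total += i
    return total

# Profile a loop of calls, the way PySpark wraps a task's execution.
prof = cProfile.Profile()
prof.enable()
for i in range(100):
    heavy_foo(i)
prof.disable()

# Render the stats to a string, like show_profiles() prints per RDD.
stream = io.StringIO()
pstats.Stats(prof, stream=stream).sort_stats("cumulative").print_stats()
report = stream.getvalue()
```

The rendered report lists `heavy_foo` with its call count and timings, matching what the test asserts against `get_print_list`.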
|
|
|
class TestProfiler(PySparkTestCase):
|
|
|
|
|
|
|
|
def setUp(self):
|
|
|
|
self._old_sys_path = list(sys.path)
|
|
|
|
class_name = self.__class__.__name__
|
|
|
|
conf = SparkConf().set("spark.python.profile", "true")
|
|
|
|
self.sc = SparkContext('local[4]', class_name, batchSize=2, conf=conf)
|
|
|
|
|
|
|
|
def test_profiler(self):
|
|
|
|
|
|
|
|
def heavy_foo(x):
|
|
|
|
for i in range(1 << 20):
|
|
|
|
x = 1
|
|
|
|
rdd = self.sc.parallelize(range(100))
|
|
|
|
rdd.foreach(heavy_foo)
|
|
|
|
profiles = self.sc._profile_stats
|
|
|
|
self.assertEqual(1, len(profiles))
|
|
|
|
id, acc, _ = profiles[0]
|
|
|
|
stats = acc.value
|
|
|
|
self.assertTrue(stats is not None)
|
|
|
|
width, stat_list = stats.get_print_list([])
|
|
|
|
func_names = [func_name for fname, n, func_name in stat_list]
|
|
|
|
self.assertTrue("heavy_foo" in func_names)
|
|
|
|
|
|
|
|
self.sc.show_profiles()
|
|
|
|
d = tempfile.gettempdir()
|
|
|
|
self.sc.dump_profiles(d)
|
|
|
|
self.assertTrue("rdd_%d.pstats" % id in os.listdir(d))
|
|
|
|
|
|
|
|
|
2014-09-03 22:08:39 -04:00
|
|
|
class TestSQL(PySparkTestCase):
|
|
|
|
|
|
|
|
def setUp(self):
|
|
|
|
PySparkTestCase.setUp(self)
|
|
|
|
self.sqlCtx = SQLContext(self.sc)
|
|
|
|
|
|
|
|
def test_udf(self):
|
|
|
|
self.sqlCtx.registerFunction("twoArgs", lambda x, y: len(x) + y, IntegerType())
|
|
|
|
[row] = self.sqlCtx.sql("SELECT twoArgs('test', 1)").collect()
|
|
|
|
self.assertEqual(row[0], 5)
|
|
|
|
|
|
|
|
def test_broadcast_in_udf(self):
|
|
|
|
bar = {"a": "aa", "b": "bb", "c": "abc"}
|
|
|
|
foo = self.sc.broadcast(bar)
|
|
|
|
self.sqlCtx.registerFunction("MYUDF", lambda x: foo.value[x] if x else '')
|
|
|
|
[res] = self.sqlCtx.sql("SELECT MYUDF('c')").collect()
|
|
|
|
self.assertEqual("abc", res[0])
|
|
|
|
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()
|
|
|
|
self.assertEqual("", res[0])
|
|
|
|
|
2014-09-12 22:05:39 -04:00
|
|
|
def test_basic_functions(self):
|
|
|
|
rdd = self.sc.parallelize(['{"foo":"bar"}', '{"foo":"baz"}'])
|
|
|
|
srdd = self.sqlCtx.jsonRDD(rdd)
|
|
|
|
srdd.count()
|
|
|
|
srdd.collect()
|
|
|
|
srdd.schemaString()
|
|
|
|
srdd.schema()
|
|
|
|
|
|
|
|
# cache and checkpoint
|
|
|
|
self.assertFalse(srdd.is_cached)
|
|
|
|
srdd.persist()
|
|
|
|
srdd.unpersist()
|
|
|
|
srdd.cache()
|
|
|
|
self.assertTrue(srdd.is_cached)
|
|
|
|
self.assertFalse(srdd.isCheckpointed())
|
|
|
|
self.assertEqual(None, srdd.getCheckpointFile())
|
|
|
|
|
|
|
|
srdd = srdd.coalesce(2, True)
|
|
|
|
srdd = srdd.repartition(3)
|
|
|
|
srdd = srdd.distinct()
|
|
|
|
srdd.intersection(srdd)
|
|
|
|
self.assertEqual(2, srdd.count())
|
|
|
|
|
|
|
|
srdd.registerTempTable("temp")
|
|
|
|
srdd = self.sqlCtx.sql("select foo from temp")
|
|
|
|
srdd.count()
|
|
|
|
srdd.collect()
|
|
|
|
|
2014-09-16 14:39:57 -04:00
|
|
|
def test_distinct(self):
|
|
|
|
rdd = self.sc.parallelize(['{"a": 1}', '{"b": 2}', '{"c": 3}']*10, 10)
|
|
|
|
srdd = self.sqlCtx.jsonRDD(rdd)
|
|
|
|
self.assertEquals(srdd.getNumPartitions(), 10)
|
|
|
|
self.assertEquals(srdd.distinct().count(), 3)
|
|
|
|
result = srdd.distinct(5)
|
|
|
|
self.assertEquals(result.getNumPartitions(), 5)
|
|
|
|
self.assertEquals(result.count(), 3)
|
|
|
|
|
2014-09-19 18:33:42 -04:00
|
|
|
def test_apply_schema_to_row(self):
|
|
|
|
srdd = self.sqlCtx.jsonRDD(self.sc.parallelize(["""{"a":2}"""]))
|
|
|
|
srdd2 = self.sqlCtx.applySchema(srdd.map(lambda x: x), srdd.schema())
|
|
|
|
self.assertEqual(srdd.collect(), srdd2.collect())
|
|
|
|
|
|
|
|
rdd = self.sc.parallelize(range(10)).map(lambda x: Row(a=x))
|
|
|
|
srdd3 = self.sqlCtx.applySchema(rdd, srdd.schema())
|
|
|
|
self.assertEqual(10, srdd3.count())
|
|
|
|
|
2014-09-27 15:21:37 -04:00
|
|
|
def test_serialize_nested_array_and_map(self):
|
|
|
|
d = [Row(l=[Row(a=1, b='s')], d={"key": Row(c=1.0, d="2")})]
|
|
|
|
rdd = self.sc.parallelize(d)
|
|
|
|
srdd = self.sqlCtx.inferSchema(rdd)
|
|
|
|
row = srdd.first()
|
|
|
|
self.assertEqual(1, len(row.l))
|
|
|
|
self.assertEqual(1, row.l[0].a)
|
|
|
|
self.assertEqual("2", row.d["key"].d)
|
|
|
|
|
|
|
|
l = srdd.map(lambda x: x.l).first()
|
|
|
|
self.assertEqual(1, len(l))
|
|
|
|
self.assertEqual('s', l[0].b)
|
|
|
|
|
|
|
|
d = srdd.map(lambda x: x.d).first()
|
|
|
|
self.assertEqual(1, len(d))
|
|
|
|
self.assertEqual(1.0, d["key"].c)
|
|
|
|
|
|
|
|
row = srdd.map(lambda x: x.d["key"]).first()
|
|
|
|
self.assertEqual(1.0, row.c)
|
|
|
|
self.assertEqual("2", row.d)
|
|
|
|
|
2014-09-03 22:08:39 -04:00
|
|
|
|
2013-02-01 03:25:19 -05:00
|
|
|
class TestIO(PySparkTestCase):
|
|
|
|
|
|
|
|
def test_stdout_redirection(self):
|
|
|
|
import subprocess
|
2014-07-22 01:30:53 -04:00
|
|
|
|
2013-02-01 03:25:19 -05:00
|
|
|
def func(x):
|
|
|
|
subprocess.check_call('ls', shell=True)
|
|
|
|
self.sc.parallelize([1]).foreach(func)
|
|
|
|
|
|
|
|
|
SPARK-1416: PySpark support for SequenceFile and Hadoop InputFormats
So I finally resurrected this PR. It seems the old one against the incubator mirror is no longer available, so I cannot reference it.
This adds initial support for reading Hadoop ```SequenceFile```s, as well as arbitrary Hadoop ```InputFormat```s, in PySpark.
# Overview
The basics are as follows:
1. ```PythonRDD``` object contains the relevant methods, that are in turn invoked by ```SparkContext``` in PySpark
2. The SequenceFile or InputFormat is read on the Scala side and converted from ```Writable``` instances to the relevant Scala classes (in the case of primitives)
3. Pyrolite is used to serialize Java objects. If this fails, the fallback is ```toString```
4. ```PickleSerializer``` on the Python side deserializes.
This works "out of the box" for simple ```Writable```s:
* ```Text```
* ```IntWritable```, ```DoubleWritable```, ```FloatWritable```
* ```NullWritable```
* ```BooleanWritable```
* ```BytesWritable```
* ```MapWritable```
It also works for simple, "struct-like" classes. Due to the way Pyrolite works, this requires that the classes satisfy the JavaBeans conventions (i.e. with fields and a no-arg constructor and getters/setters). (Perhaps in future some sugar for case classes and reflection could be added).
I've tested it out with ```ESInputFormat``` as an example and it works very nicely:
```python
conf = {"es.resource" : "index/type" }
rdd = sc.newAPIHadoopRDD("org.elasticsearch.hadoop.mr.EsInputFormat", "org.apache.hadoop.io.NullWritable", "org.elasticsearch.hadoop.mr.LinkedMapWritable", conf=conf)
rdd.first()
```
I suspect for things like HBase/Cassandra it will be a bit trickier to get it to work out of the box.
# Some things still outstanding:
1. ~~Requires ```msgpack-python``` and will fail without it. As originally discussed with Josh, add a ```as_strings``` argument that defaults to ```False```, that can be used if ```msgpack-python``` is not available~~
2. ~~I see from https://github.com/apache/spark/pull/363 that Pyrolite is being used there for SerDe between Scala and Python. @ahirreddy @mateiz what is the plan behind this - is Pyrolite preferred? It seems from a cursory glance that adapting the ```msgpack```-based SerDe here to use Pyrolite wouldn't be too hard~~
3. ~~Support the key and value "wrapper" that would allow a Scala/Java function to be plugged in that would transform whatever the key/value Writable class is into something that can be serialized (e.g. convert some custom Writable to a JavaBean or ```java.util.Map``` that can be easily serialized)~~
4. Support ```saveAsSequenceFile``` and ```saveAsHadoopFile``` etc. This would require SerDe in the reverse direction, that can be handled by Pyrolite. Will work on this as a separate PR
Author: Nick Pentreath <nick.pentreath@gmail.com>
Closes #455 from MLnick/pyspark-inputformats and squashes the following commits:
268df7e [Nick Pentreath] Documentation changes mer @pwendell comments
761269b [Nick Pentreath] Address @pwendell comments, simplify default writable conversions and remove registry.
4c972d8 [Nick Pentreath] Add license headers
d150431 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
cde6af9 [Nick Pentreath] Parameterize converter trait
5ebacfa [Nick Pentreath] Update docs for PySpark input formats
a985492 [Nick Pentreath] Move Converter examples to own package
365d0be [Nick Pentreath] Make classes private[python]. Add docs and @Experimental annotation to Converter interface.
eeb8205 [Nick Pentreath] Fix path relative to SPARK_HOME in tests
1eaa08b [Nick Pentreath] HBase -> Cassandra app name oversight
3f90c3e [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
2c18513 [Nick Pentreath] Add examples for reading HBase and Cassandra InputFormats from Python
b65606f [Nick Pentreath] Add converter interface
5757f6e [Nick Pentreath] Default key/value classes for sequenceFile are None
085b55f [Nick Pentreath] Move input format tests to tests.py and clean up docs
43eb728 [Nick Pentreath] PySpark InputFormats docs into programming guide
94beedc [Nick Pentreath] Clean up args in PythonRDD. Set key/value converter defaults to None for PySpark context.py methods
1a4a1d6 [Nick Pentreath] Address @mateiz style comments
01e0813 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
15a7d07 [Nick Pentreath] Remove default args for key/value classes. Arg names to camelCase
9fe6bd5 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
84fe8e3 [Nick Pentreath] Python programming guide space formatting
d0f52b6 [Nick Pentreath] Python programming guide
7caa73a [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
93ef995 [Nick Pentreath] Add back context.py changes
9ef1896 [Nick Pentreath] Recover earlier changes lost in previous merge for serializers.py
077ecb2 [Nick Pentreath] Recover earlier changes lost in previous merge for context.py
5af4770 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
35b8e3a [Nick Pentreath] Another fix for test ordering
bef3afb [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
e001b94 [Nick Pentreath] Fix test failures due to ordering
78978d9 [Nick Pentreath] Add doc for SequenceFile and InputFormat support to Python programming guide
64eb051 [Nick Pentreath] Scalastyle fix
e7552fa [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
44f2857 [Nick Pentreath] Remove msgpack dependency and switch serialization to Pyrolite, plus some clean up and refactoring
c0ebfb6 [Nick Pentreath] Change sequencefile test data generator to easily be called from PySpark tests
1d7c17c [Nick Pentreath] Amend tests to auto-generate sequencefile data in temp dir
17a656b [Nick Pentreath] remove binary sequencefile for tests
f60959e [Nick Pentreath] Remove msgpack dependency and serializer from PySpark
450e0a2 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
31a2fff [Nick Pentreath] Scalastyle fixes
fc5099e [Nick Pentreath] Add Apache license headers
4e08983 [Nick Pentreath] Clean up docs for PySpark context methods
b20ec7e [Nick Pentreath] Clean up merge duplicate dependencies
951c117 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
f6aac55 [Nick Pentreath] Bring back msgpack
9d2256e [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
1bbbfb0 [Nick Pentreath] Clean up SparkBuild from merge
a67dfad [Nick Pentreath] Clean up Msgpack serialization and registering
7237263 [Nick Pentreath] Add back msgpack serializer and hadoop file code lost during merging
25da1ca [Nick Pentreath] Add generator for nulls, bools, bytes and maps
65360d5 [Nick Pentreath] Adding test SequenceFiles
0c612e5 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
d72bf18 [Nick Pentreath] msgpack
dd57922 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
e67212a [Nick Pentreath] Add back msgpack dependency
f2d76a0 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
41856a5 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
97ef708 [Nick Pentreath] Remove old writeToStream
2beeedb [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
795a763 [Nick Pentreath] Change name to WriteInputFormatTestDataGenerator. Cleanup some var names. Use SPARK_HOME in path for writing test sequencefile data.
174f520 [Nick Pentreath] Add back graphx settings
703ee65 [Nick Pentreath] Add back msgpack
619c0fa [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
1c8efbc [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
eb40036 [Nick Pentreath] Remove unused comment lines
4d7ef2e [Nick Pentreath] Fix indentation
f1d73e3 [Nick Pentreath] mergeConfs returns a copy rather than mutating one of the input arguments
0f5cd84 [Nick Pentreath] Remove unused pair UTF8 class. Add comments to msgpack deserializer
4294cbb [Nick Pentreath] Add old Hadoop api methods. Clean up and expand comments. Clean up argument names
818a1e6 [Nick Pentreath] Add seqencefile and Hadoop InputFormat support to PythonRDD
4e7c9e3 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
c304cc8 [Nick Pentreath] Adding supporting sequncefiles for tests. Cleaning up
4b0a43f [Nick Pentreath] Refactoring utils into own objects. Cleaning up old commented-out code
d86325f [Nick Pentreath] Initial WIP of PySpark support for SequenceFile and arbitrary Hadoop InputFormat
2014-06-10 01:21:03 -04:00
|
|
|
class TestInputFormat(PySparkTestCase):

    def setUp(self):
        PySparkTestCase.setUp(self)
        self.tempdir = tempfile.NamedTemporaryFile(delete=False)
        os.unlink(self.tempdir.name)
        self.sc._jvm.WriteInputFormatTestDataGenerator.generateData(self.tempdir.name, self.sc._jsc)

    def tearDown(self):
        PySparkTestCase.tearDown(self)
        shutil.rmtree(self.tempdir.name)

    def test_sequencefiles(self):
        basepath = self.tempdir.name
        ints = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfint/",
                                           "org.apache.hadoop.io.IntWritable",
                                           "org.apache.hadoop.io.Text").collect())
        ei = [(1, u'aa'), (1, u'aa'), (2, u'aa'), (2, u'bb'), (2, u'bb'), (3, u'cc')]
        self.assertEqual(ints, ei)

        doubles = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfdouble/",
                                              "org.apache.hadoop.io.DoubleWritable",
                                              "org.apache.hadoop.io.Text").collect())
        ed = [(1.0, u'aa'), (1.0, u'aa'), (2.0, u'aa'), (2.0, u'bb'), (2.0, u'bb'), (3.0, u'cc')]
        self.assertEqual(doubles, ed)

        bytes = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfbytes/",
                                            "org.apache.hadoop.io.IntWritable",
                                            "org.apache.hadoop.io.BytesWritable").collect())
        ebs = [(1, bytearray('aa', 'utf-8')),
               (1, bytearray('aa', 'utf-8')),
               (2, bytearray('aa', 'utf-8')),
               (2, bytearray('bb', 'utf-8')),
               (2, bytearray('bb', 'utf-8')),
               (3, bytearray('cc', 'utf-8'))]
        self.assertEqual(bytes, ebs)
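As an aside, the path these tests exercise converts Writables to Java objects, pickles them with Pyrolite on the JVM side, and unpickles them with `PickleSerializer` in Python. The sketch below (an illustration, not part of the test suite or the Spark API) mimics only the final Python-side step for the `(IntWritable, Text)` pairs used above.

```python
import pickle

# Expected (int, unicode) pairs, as in the sfint test data above.
expected = [(1, u'aa'), (1, u'aa'), (2, u'aa'), (2, u'bb'), (2, u'bb'), (3, u'cc')]

# Pickle protocol 2 is a binary protocol compatible with what external
# picklers such as Pyrolite produce; the round-trip preserves the pairs.
payload = pickle.dumps(expected, protocol=2)
assert pickle.loads(payload) == expected
```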
        text = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sftext/",
                                           "org.apache.hadoop.io.Text",
                                           "org.apache.hadoop.io.Text").collect())
        et = [(u'1', u'aa'),
              (u'1', u'aa'),
              (u'2', u'aa'),
              (u'2', u'bb'),
              (u'2', u'bb'),
              (u'3', u'cc')]
        self.assertEqual(text, et)

        bools = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfbool/",
                                            "org.apache.hadoop.io.IntWritable",
                                            "org.apache.hadoop.io.BooleanWritable").collect())
        eb = [(1, False), (1, True), (2, False), (2, False), (2, True), (3, True)]
        self.assertEqual(bools, eb)

        nulls = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfnull/",
                                            "org.apache.hadoop.io.IntWritable",
                                            "org.apache.hadoop.io.BooleanWritable").collect())
        en = [(1, None), (1, None), (2, None), (2, None), (2, None), (3, None)]
        self.assertEqual(nulls, en)

        maps = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfmap/",
                                           "org.apache.hadoop.io.IntWritable",
                                           "org.apache.hadoop.io.MapWritable").collect())
        em = [(1, {}),
              (1, {3.0: u'bb'}),
              (2, {1.0: u'aa'}),
              (2, {1.0: u'cc'}),
              (3, {2.0: u'dd'})]
        self.assertEqual(maps, em)
        # arrays get pickled to tuples by default
        tuples = sorted(self.sc.sequenceFile(
            basepath + "/sftestdata/sfarray/",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.spark.api.python.DoubleArrayWritable").collect())
        et = [(1, ()),
              (2, (3.0, 4.0, 5.0)),
              (3, (4.0, 5.0, 6.0))]
        self.assertEqual(tuples, et)

        # with custom converters, primitive arrays can stay as arrays
        arrays = sorted(self.sc.sequenceFile(
            basepath + "/sftestdata/sfarray/",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.spark.api.python.DoubleArrayWritable",
            valueConverter="org.apache.spark.api.python.WritableToDoubleArrayConverter").collect())
        ea = [(1, array('d')),
              (2, array('d', [3.0, 4.0, 5.0])),
              (3, array('d', [4.0, 5.0, 6.0]))]
        self.assertEqual(arrays, ea)
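The distinction the two array cases rely on: a Java `double[]` pickled by default surfaces in Python as a tuple, while the custom `WritableToDoubleArrayConverter` path yields `array('d')` objects. The standalone sketch below (an illustration, not Spark code) shows that the two carry the same values but compare unequal, which is why the test keeps separate expected lists.

```python
from array import array

# Same numeric contents in both representations.
as_tuple = (3.0, 4.0, 5.0)
as_array = array('d', [3.0, 4.0, 5.0])

# Values are convertible, but array and tuple are distinct types and
# never compare equal to each other.
assert tuple(as_array) == as_tuple
assert as_array != as_tuple
```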
        clazz = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfclass/",
                                            "org.apache.hadoop.io.Text",
                                            "org.apache.spark.api.python.TestWritable").collect())
        ec = (u'1',
              {u'__class__': u'org.apache.spark.api.python.TestWritable',
               u'double': 54.0, u'int': 123, u'str': u'test1'})
        self.assertEqual(clazz[0], ec)

        unbatched_clazz = sorted(self.sc.sequenceFile(basepath + "/sftestdata/sfclass/",
                                                      "org.apache.hadoop.io.Text",
                                                      "org.apache.spark.api.python.TestWritable",
                                                      batchSize=1).collect())
        self.assertEqual(unbatched_clazz[0], ec)
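The `ec` value above shows the shape a JavaBean-style Writable takes on the Python side: a dict keyed by bean property name, plus a `__class__` entry naming the Java class. The helper below is a hypothetical sketch of that mapping (the helper name and signature are illustrative, not Spark API), reproducing the expected dict from the test.

```python
def bean_to_dict(class_name, props):
    # Mimic the Pyrolite-style mapping: bean properties become dict
    # entries, and '__class__' records the originating Java class name.
    d = {u'__class__': class_name}
    d.update(props)
    return d

ec_value = bean_to_dict(u'org.apache.spark.api.python.TestWritable',
                        {u'double': 54.0, u'int': 123, u'str': u'test1'})
assert ec_value[u'__class__'] == u'org.apache.spark.api.python.TestWritable'
assert ec_value[u'int'] == 123
```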
SPARK-1416: PySpark support for SequenceFile and Hadoop InputFormats
So I finally resurrected this PR. It seems the old one against the incubator mirror is no longer available, so I cannot reference it.
This adds initial support for reading Hadoop ```SequenceFile```s, as well as arbitrary Hadoop ```InputFormat```s, in PySpark.
# Overview
The basics are as follows:
1. ```PythonRDD``` object contains the relevant methods, that are in turn invoked by ```SparkContext``` in PySpark
2. The SequenceFile or InputFormat is read on the Scala side and converted from ```Writable``` instances to the relevant Scala classes (in the case of primitives)
3. Pyrolite is used to serialize Java objects. If this fails, the fallback is ```toString```
4. ```PickleSerializer``` on the Python side deserializes.
This works "out the box" for simple ```Writable```s:
* ```Text```
* ```IntWritable```, ```DoubleWritable```, ```FloatWritable```
* ```NullWritable```
* ```BooleanWritable```
* ```BytesWritable```
* ```MapWritable```
It also works for simple, "struct-like" classes. Due to the way Pyrolite works, this requires that the classes satisfy the JavaBeans convenstions (i.e. with fields and a no-arg constructor and getters/setters). (Perhaps in future some sugar for case classes and reflection could be added).
I've tested it out with ```ESInputFormat``` as an example and it works very nicely:
```python
conf = {"es.resource" : "index/type" }
rdd = sc.newAPIHadoopRDD("org.elasticsearch.hadoop.mr.EsInputFormat", "org.apache.hadoop.io.NullWritable", "org.elasticsearch.hadoop.mr.LinkedMapWritable", conf=conf)
rdd.first()
```
I suspect for things like HBase/Cassandra it will be a bit trickier to get it to work out the box.
# Some things still outstanding:
1. ~~Requires ```msgpack-python``` and will fail without it. As originally discussed with Josh, add a ```as_strings``` argument that defaults to ```False```, that can be used if ```msgpack-python``` is not available~~
2. ~~I see from https://github.com/apache/spark/pull/363 that Pyrolite is being used there for SerDe between Scala and Python. @ahirreddy @mateiz what is the plan behind this - is Pyrolite preferred? It seems from a cursory glance that adapting the ```msgpack```-based SerDe here to use Pyrolite wouldn't be too hard~~
3. ~~Support the key and value "wrapper" that would allow a Scala/Java function to be plugged in that would transform whatever the key/value Writable class is into something that can be serialized (e.g. convert some custom Writable to a JavaBean or ```java.util.Map``` that can be easily serialized)~~
4. Support ```saveAsSequenceFile``` and ```saveAsHadoopFile``` etc. This would require SerDe in the reverse direction, that can be handled by Pyrolite. Will work on this as a separate PR
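The key/value "wrapper" in item 3 corresponds to the converter interface added on the Scala side in this PR. A minimal Python rendering of the same idea, under the assumption that the class and method names below are purely illustrative (the real trait is Scala code), looks like:

```python
# Hypothetical Python rendering of the Converter idea from item 3; the real
# trait lives on the Scala side and these names are illustrative only.
class Converter(object):
    def convert(self, obj):
        raise NotImplementedError

class ToStringConverter(Converter):
    """Mirrors the toString fallback described in the overview."""
    def convert(self, obj):
        return str(obj)

assert ToStringConverter().convert(42) == "42"
```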
Author: Nick Pentreath <nick.pentreath@gmail.com>
Closes #455 from MLnick/pyspark-inputformats and squashes the following commits:
268df7e [Nick Pentreath] Documentation changes mer @pwendell comments
761269b [Nick Pentreath] Address @pwendell comments, simplify default writable conversions and remove registry.
4c972d8 [Nick Pentreath] Add license headers
d150431 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
cde6af9 [Nick Pentreath] Parameterize converter trait
5ebacfa [Nick Pentreath] Update docs for PySpark input formats
a985492 [Nick Pentreath] Move Converter examples to own package
365d0be [Nick Pentreath] Make classes private[python]. Add docs and @Experimental annotation to Converter interface.
eeb8205 [Nick Pentreath] Fix path relative to SPARK_HOME in tests
1eaa08b [Nick Pentreath] HBase -> Cassandra app name oversight
3f90c3e [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
2c18513 [Nick Pentreath] Add examples for reading HBase and Cassandra InputFormats from Python
b65606f [Nick Pentreath] Add converter interface
5757f6e [Nick Pentreath] Default key/value classes for sequenceFile asre None
085b55f [Nick Pentreath] Move input format tests to tests.py and clean up docs
43eb728 [Nick Pentreath] PySpark InputFormats docs into programming guide
94beedc [Nick Pentreath] Clean up args in PythonRDD. Set key/value converter defaults to None for PySpark context.py methods
1a4a1d6 [Nick Pentreath] Address @mateiz style comments
01e0813 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
15a7d07 [Nick Pentreath] Remove default args for key/value classes. Arg names to camelCase
9fe6bd5 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
84fe8e3 [Nick Pentreath] Python programming guide space formatting
d0f52b6 [Nick Pentreath] Python programming guide
7caa73a [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
93ef995 [Nick Pentreath] Add back context.py changes
9ef1896 [Nick Pentreath] Recover earlier changes lost in previous merge for serializers.py
077ecb2 [Nick Pentreath] Recover earlier changes lost in previous merge for context.py
5af4770 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
35b8e3a [Nick Pentreath] Another fix for test ordering
bef3afb [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
e001b94 [Nick Pentreath] Fix test failures due to ordering
78978d9 [Nick Pentreath] Add doc for SequenceFile and InputFormat support to Python programming guide
64eb051 [Nick Pentreath] Scalastyle fix
e7552fa [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
44f2857 [Nick Pentreath] Remove msgpack dependency and switch serialization to Pyrolite, plus some clean up and refactoring
c0ebfb6 [Nick Pentreath] Change sequencefile test data generator to easily be called from PySpark tests
1d7c17c [Nick Pentreath] Amend tests to auto-generate sequencefile data in temp dir
17a656b [Nick Pentreath] remove binary sequencefile for tests
f60959e [Nick Pentreath] Remove msgpack dependency and serializer from PySpark
450e0a2 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
31a2fff [Nick Pentreath] Scalastyle fixes
fc5099e [Nick Pentreath] Add Apache license headers
4e08983 [Nick Pentreath] Clean up docs for PySpark context methods
b20ec7e [Nick Pentreath] Clean up merge duplicate dependencies
951c117 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
f6aac55 [Nick Pentreath] Bring back msgpack
9d2256e [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
1bbbfb0 [Nick Pentreath] Clean up SparkBuild from merge
a67dfad [Nick Pentreath] Clean up Msgpack serialization and registering
7237263 [Nick Pentreath] Add back msgpack serializer and hadoop file code lost during merging
25da1ca [Nick Pentreath] Add generator for nulls, bools, bytes and maps
65360d5 [Nick Pentreath] Adding test SequenceFiles
0c612e5 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
d72bf18 [Nick Pentreath] msgpack
dd57922 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
e67212a [Nick Pentreath] Add back msgpack dependency
f2d76a0 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
41856a5 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
97ef708 [Nick Pentreath] Remove old writeToStream
2beeedb [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
795a763 [Nick Pentreath] Change name to WriteInputFormatTestDataGenerator. Cleanup some var names. Use SPARK_HOME in path for writing test sequencefile data.
174f520 [Nick Pentreath] Add back graphx settings
703ee65 [Nick Pentreath] Add back msgpack
619c0fa [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
1c8efbc [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
eb40036 [Nick Pentreath] Remove unused comment lines
4d7ef2e [Nick Pentreath] Fix indentation
f1d73e3 [Nick Pentreath] mergeConfs returns a copy rather than mutating one of the input arguments
0f5cd84 [Nick Pentreath] Remove unused pair UTF8 class. Add comments to msgpack deserializer
4294cbb [Nick Pentreath] Add old Hadoop api methods. Clean up and expand comments. Clean up argument names
818a1e6 [Nick Pentreath] Add seqencefile and Hadoop InputFormat support to PythonRDD
4e7c9e3 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
c304cc8 [Nick Pentreath] Adding supporting sequncefiles for tests. Cleaning up
4b0a43f [Nick Pentreath] Refactoring utils into own objects. Cleaning up old commented-out code
d86325f [Nick Pentreath] Initial WIP of PySpark support for SequenceFile and arbitrary Hadoop InputFormat
def test_oldhadoop(self):
    basepath = self.tempdir.name
    ints = sorted(self.sc.hadoopFile(basepath + "/sftestdata/sfint/",
                                     "org.apache.hadoop.mapred.SequenceFileInputFormat",
                                     "org.apache.hadoop.io.IntWritable",
                                     "org.apache.hadoop.io.Text").collect())
    ei = [(1, u'aa'), (1, u'aa'), (2, u'aa'), (2, u'bb'), (2, u'bb'), (3, u'cc')]
    self.assertEqual(ints, ei)

    hellopath = os.path.join(SPARK_HOME, "python/test_support/hello.txt")
    oldconf = {"mapred.input.dir": hellopath}
    hello = self.sc.hadoopRDD("org.apache.hadoop.mapred.TextInputFormat",
                              "org.apache.hadoop.io.LongWritable",
                              "org.apache.hadoop.io.Text",
                              conf=oldconf).collect()
    result = [(0, u'Hello World!')]
    self.assertEqual(hello, result)

def test_newhadoop(self):
    basepath = self.tempdir.name
    ints = sorted(self.sc.newAPIHadoopFile(
        basepath + "/sftestdata/sfint/",
        "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
        "org.apache.hadoop.io.IntWritable",
        "org.apache.hadoop.io.Text").collect())
    ei = [(1, u'aa'), (1, u'aa'), (2, u'aa'), (2, u'bb'), (2, u'bb'), (3, u'cc')]
    self.assertEqual(ints, ei)

    hellopath = os.path.join(SPARK_HOME, "python/test_support/hello.txt")
    newconf = {"mapred.input.dir": hellopath}
    hello = self.sc.newAPIHadoopRDD("org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
                                    "org.apache.hadoop.io.LongWritable",
                                    "org.apache.hadoop.io.Text",
                                    conf=newconf).collect()
    result = [(0, u'Hello World!')]
    self.assertEqual(hello, result)

def test_newolderror(self):
    basepath = self.tempdir.name
    self.assertRaises(Exception, lambda: self.sc.hadoopFile(
        basepath + "/sftestdata/sfint/",
        "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
        "org.apache.hadoop.io.IntWritable",
        "org.apache.hadoop.io.Text"))

    self.assertRaises(Exception, lambda: self.sc.newAPIHadoopFile(
        basepath + "/sftestdata/sfint/",
        "org.apache.hadoop.mapred.SequenceFileInputFormat",
        "org.apache.hadoop.io.IntWritable",
        "org.apache.hadoop.io.Text"))

def test_bad_inputs(self):
    basepath = self.tempdir.name
    self.assertRaises(Exception, lambda: self.sc.sequenceFile(
        basepath + "/sftestdata/sfint/",
        "org.apache.hadoop.io.NotValidWritable",
        "org.apache.hadoop.io.Text"))
    self.assertRaises(Exception, lambda: self.sc.hadoopFile(
        basepath + "/sftestdata/sfint/",
        "org.apache.hadoop.mapred.NotValidInputFormat",
        "org.apache.hadoop.io.IntWritable",
        "org.apache.hadoop.io.Text"))
    self.assertRaises(Exception, lambda: self.sc.newAPIHadoopFile(
        basepath + "/sftestdata/sfint/",
        "org.apache.hadoop.mapreduce.lib.input.NotValidInputFormat",
        "org.apache.hadoop.io.IntWritable",
        "org.apache.hadoop.io.Text"))

def test_converters(self):
    # use of custom converters
d72bf18 [Nick Pentreath] msgpack
dd57922 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
e67212a [Nick Pentreath] Add back msgpack dependency
f2d76a0 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
41856a5 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
97ef708 [Nick Pentreath] Remove old writeToStream
2beeedb [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
795a763 [Nick Pentreath] Change name to WriteInputFormatTestDataGenerator. Cleanup some var names. Use SPARK_HOME in path for writing test sequencefile data.
174f520 [Nick Pentreath] Add back graphx settings
703ee65 [Nick Pentreath] Add back msgpack
619c0fa [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
1c8efbc [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
eb40036 [Nick Pentreath] Remove unused comment lines
4d7ef2e [Nick Pentreath] Fix indentation
f1d73e3 [Nick Pentreath] mergeConfs returns a copy rather than mutating one of the input arguments
0f5cd84 [Nick Pentreath] Remove unused pair UTF8 class. Add comments to msgpack deserializer
4294cbb [Nick Pentreath] Add old Hadoop api methods. Clean up and expand comments. Clean up argument names
818a1e6 [Nick Pentreath] Add sequencefile and Hadoop InputFormat support to PythonRDD
4e7c9e3 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
c304cc8 [Nick Pentreath] Adding supporting sequencefiles for tests. Cleaning up
4b0a43f [Nick Pentreath] Refactoring utils into own objects. Cleaning up old commented-out code
d86325f [Nick Pentreath] Initial WIP of PySpark support for SequenceFile and arbitrary Hadoop InputFormat
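The conversion flow this PR describes (Writable read on the Scala side, pickled via Pyrolite, deserialized by `PickleSerializer` in Python, with a `toString` fallback for unknown types) can be sketched without Spark. `convert_writable` below is a hypothetical stand-in for the Scala-side logic, not the actual implementation:

```python
def convert_writable(value):
    """Hypothetical sketch of the Writable -> Python conversion described
    above: primitive Writables map to Python primitives, MapWritable maps
    to a dict, and anything unrecognized falls back to its string form
    (mirroring the toString fallback on the Scala side)."""
    if value is None or isinstance(value, (bool, int, float, bytes, str)):
        return value  # Text/IntWritable/DoubleWritable/NullWritable/... cases
    if isinstance(value, dict):
        # MapWritable: convert keys and values recursively
        return dict((convert_writable(k), convert_writable(v))
                    for k, v in value.items())
    return str(value)  # fallback, like the Scala toString path

assert convert_writable(3) == 3
assert convert_writable({1.0: u'aa'}) == {1.0: u'aa'}
```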
        basepath = self.tempdir.name
        maps = sorted(self.sc.sequenceFile(
            basepath + "/sftestdata/sfmap/",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.MapWritable",
            keyConverter="org.apache.spark.api.python.TestInputKeyConverter",
            valueConverter="org.apache.spark.api.python.TestInputValueConverter").collect())
        em = [(u'\x01', []),
              (u'\x01', [3.0]),
              (u'\x02', [1.0]),
              (u'\x02', [1.0]),
              (u'\x03', [2.0])]
        self.assertEqual(maps, em)


class TestOutputFormat(PySparkTestCase):

    def setUp(self):
        PySparkTestCase.setUp(self)
        self.tempdir = tempfile.NamedTemporaryFile(delete=False)
        os.unlink(self.tempdir.name)

    def tearDown(self):
        PySparkTestCase.tearDown(self)
        shutil.rmtree(self.tempdir.name, ignore_errors=True)

    def test_sequencefiles(self):
        basepath = self.tempdir.name
        ei = [(1, u'aa'), (1, u'aa'), (2, u'aa'), (2, u'bb'), (2, u'bb'), (3, u'cc')]
        self.sc.parallelize(ei).saveAsSequenceFile(basepath + "/sfint/")
        ints = sorted(self.sc.sequenceFile(basepath + "/sfint/").collect())
        self.assertEqual(ints, ei)

        ed = [(1.0, u'aa'), (1.0, u'aa'), (2.0, u'aa'), (2.0, u'bb'), (2.0, u'bb'), (3.0, u'cc')]
        self.sc.parallelize(ed).saveAsSequenceFile(basepath + "/sfdouble/")
        doubles = sorted(self.sc.sequenceFile(basepath + "/sfdouble/").collect())
        self.assertEqual(doubles, ed)

        ebs = [(1, bytearray(b'\x00\x07spam\x08')), (2, bytearray(b'\x00\x07spam\x08'))]
        self.sc.parallelize(ebs).saveAsSequenceFile(basepath + "/sfbytes/")
        bytes = sorted(self.sc.sequenceFile(basepath + "/sfbytes/").collect())
        self.assertEqual(bytes, ebs)

        et = [(u'1', u'aa'),
              (u'2', u'bb'),
              (u'3', u'cc')]
        self.sc.parallelize(et).saveAsSequenceFile(basepath + "/sftext/")
        text = sorted(self.sc.sequenceFile(basepath + "/sftext/").collect())
        self.assertEqual(text, et)

        eb = [(1, False), (1, True), (2, False), (2, False), (2, True), (3, True)]
        self.sc.parallelize(eb).saveAsSequenceFile(basepath + "/sfbool/")
        bools = sorted(self.sc.sequenceFile(basepath + "/sfbool/").collect())
        self.assertEqual(bools, eb)

        en = [(1, None), (1, None), (2, None), (2, None), (2, None), (3, None)]
        self.sc.parallelize(en).saveAsSequenceFile(basepath + "/sfnull/")
        nulls = sorted(self.sc.sequenceFile(basepath + "/sfnull/").collect())
        self.assertEqual(nulls, en)

        em = [(1, {}),
              (1, {3.0: u'bb'}),
              (2, {1.0: u'aa'}),
              (2, {1.0: u'cc'}),
              (3, {2.0: u'dd'})]
        self.sc.parallelize(em).saveAsSequenceFile(basepath + "/sfmap/")
        maps = sorted(self.sc.sequenceFile(basepath + "/sfmap/").collect())
        self.assertEqual(maps, em)

    def test_oldhadoop(self):
        basepath = self.tempdir.name
        dict_data = [(1, {}),
                     (1, {"row1": 1.0}),
                     (2, {"row2": 2.0})]
        self.sc.parallelize(dict_data).saveAsHadoopFile(
            basepath + "/oldhadoop/",
            "org.apache.hadoop.mapred.SequenceFileOutputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.MapWritable")
        result = sorted(self.sc.hadoopFile(
            basepath + "/oldhadoop/",
            "org.apache.hadoop.mapred.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.MapWritable").collect())
        self.assertEqual(result, dict_data)

        conf = {
            "mapred.output.format.class": "org.apache.hadoop.mapred.SequenceFileOutputFormat",
            "mapred.output.key.class": "org.apache.hadoop.io.IntWritable",
            "mapred.output.value.class": "org.apache.hadoop.io.MapWritable",
            "mapred.output.dir": basepath + "/olddataset/"
        }
        self.sc.parallelize(dict_data).saveAsHadoopDataset(conf)
        input_conf = {"mapred.input.dir": basepath + "/olddataset/"}
        old_dataset = sorted(self.sc.hadoopRDD(
            "org.apache.hadoop.mapred.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.MapWritable",
            conf=input_conf).collect())
        self.assertEqual(old_dataset, dict_data)
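The old ("mapred") and new ("mapreduce") Hadoop dataset APIs exercised by these tests are driven by plain Python dicts of Hadoop property names, differing mainly in the output-format key. A minimal Spark-free sketch (the paths are placeholders, not real test data):

```python
# Sketch of the two conf-dict shapes used by saveAsHadoopDataset (old API)
# and saveAsNewAPIHadoopDataset (new API). Paths here are hypothetical.
old_conf = {
    "mapred.output.format.class": "org.apache.hadoop.mapred.SequenceFileOutputFormat",
    "mapred.output.key.class": "org.apache.hadoop.io.IntWritable",
    "mapred.output.value.class": "org.apache.hadoop.io.MapWritable",
    "mapred.output.dir": "/tmp/olddataset/",
}
new_conf = {
    "mapreduce.outputformat.class":
        "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat",
    "mapred.output.key.class": "org.apache.hadoop.io.IntWritable",
    "mapred.output.value.class": "org.apache.hadoop.io.Text",
    "mapred.output.dir": "/tmp/newdataset/",
}

# Only the output-format property name differs between the two APIs.
assert "mapred.output.format.class" in old_conf
assert "mapreduce.outputformat.class" in new_conf
```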
    def test_newhadoop(self):
        basepath = self.tempdir.name
        data = [(1, ""),
                (1, "a"),
                (2, "bcdf")]
        self.sc.parallelize(data).saveAsNewAPIHadoopFile(
            basepath + "/newhadoop/",
            "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.Text")
        result = sorted(self.sc.newAPIHadoopFile(
            basepath + "/newhadoop/",
            "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.Text").collect())
        self.assertEqual(result, data)

        conf = {
            "mapreduce.outputformat.class":
                "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat",
            "mapred.output.key.class": "org.apache.hadoop.io.IntWritable",
            "mapred.output.value.class": "org.apache.hadoop.io.Text",
            "mapred.output.dir": basepath + "/newdataset/"
        }
        self.sc.parallelize(data).saveAsNewAPIHadoopDataset(conf)
        input_conf = {"mapred.input.dir": basepath + "/newdataset/"}
        new_dataset = sorted(self.sc.newAPIHadoopRDD(
            "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.Text",
            conf=input_conf).collect())
        self.assertEqual(new_dataset, data)
    def test_newhadoop_with_array(self):
        basepath = self.tempdir.name
        # use custom ArrayWritable types and converters to handle arrays
        array_data = [(1, array('d')),
                      (1, array('d', [1.0, 2.0, 3.0])),
                      (2, array('d', [3.0, 4.0, 5.0]))]
        self.sc.parallelize(array_data).saveAsNewAPIHadoopFile(
            basepath + "/newhadoop/",
            "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.spark.api.python.DoubleArrayWritable",
            valueConverter="org.apache.spark.api.python.DoubleArrayToWritableConverter")
        result = sorted(self.sc.newAPIHadoopFile(
            basepath + "/newhadoop/",
            "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.spark.api.python.DoubleArrayWritable",
            valueConverter="org.apache.spark.api.python.WritableToDoubleArrayConverter").collect())
        self.assertEqual(result, array_data)

        conf = {
            "mapreduce.outputformat.class":
                "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat",
            "mapred.output.key.class": "org.apache.hadoop.io.IntWritable",
            "mapred.output.value.class": "org.apache.spark.api.python.DoubleArrayWritable",
            "mapred.output.dir": basepath + "/newdataset/"
        }
        self.sc.parallelize(array_data).saveAsNewAPIHadoopDataset(
            conf,
            valueConverter="org.apache.spark.api.python.DoubleArrayToWritableConverter")
        input_conf = {"mapred.input.dir": basepath + "/newdataset/"}
        new_dataset = sorted(self.sc.newAPIHadoopRDD(
            "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.spark.api.python.DoubleArrayWritable",
            valueConverter="org.apache.spark.api.python.WritableToDoubleArrayConverter",
            conf=input_conf).collect())
        self.assertEqual(new_dataset, array_data)
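The double-array values in the test above round-trip through Python's standard `array` module with typecode `'d'` (C double). A small Spark-free illustration of the container the converters produce and consume:

```python
from array import array

# The test data uses array('d', ...) for double arrays; typecode 'd'
# stores C doubles, and the empty case array('d') is also valid.
a = array('d', [1.0, 2.0, 3.0])
assert a.typecode == 'd'
assert list(a) == [1.0, 2.0, 3.0]
assert array('d') == array('d', [])  # the empty array from the test data
```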
    def test_newolderror(self):
        basepath = self.tempdir.name
        rdd = self.sc.parallelize(range(1, 4)).map(lambda x: (x, "a" * x))
        self.assertRaises(Exception, lambda: rdd.saveAsHadoopFile(
            basepath + "/newolderror/saveAsHadoopFile/",
            "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat"))
        self.assertRaises(Exception, lambda: rdd.saveAsNewAPIHadoopFile(
            basepath + "/newolderror/saveAsNewAPIHadoopFile/",
            "org.apache.hadoop.mapred.SequenceFileOutputFormat"))
    def test_bad_inputs(self):
        basepath = self.tempdir.name
        rdd = self.sc.parallelize(range(1, 4)).map(lambda x: (x, "a" * x))
        self.assertRaises(Exception, lambda: rdd.saveAsHadoopFile(
            basepath + "/badinputs/saveAsHadoopFile/",
            "org.apache.hadoop.mapred.NotValidOutputFormat"))
        self.assertRaises(Exception, lambda: rdd.saveAsNewAPIHadoopFile(
            basepath + "/badinputs/saveAsNewAPIHadoopFile/",
            "org.apache.hadoop.mapreduce.lib.output.NotValidOutputFormat"))
    def test_converters(self):
        # use of custom converters
        basepath = self.tempdir.name
        data = [(1, {3.0: u'bb'}),
                (2, {1.0: u'aa'}),
                (3, {2.0: u'dd'})]
        self.sc.parallelize(data).saveAsNewAPIHadoopFile(
            basepath + "/converters/",
            "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat",
            keyConverter="org.apache.spark.api.python.TestOutputKeyConverter",
            valueConverter="org.apache.spark.api.python.TestOutputValueConverter")
        converted = sorted(self.sc.sequenceFile(basepath + "/converters/").collect())
        expected = [(u'1', 3.0),
                    (u'2', 1.0),
                    (u'3', 2.0)]
        self.assertEqual(converted, expected)
    def test_reserialization(self):
        basepath = self.tempdir.name
        x = range(1, 5)
        y = range(1001, 1005)
        data = zip(x, y)
        rdd = self.sc.parallelize(x).zip(self.sc.parallelize(y))
        rdd.saveAsSequenceFile(basepath + "/reserialize/sequence")
        result1 = sorted(self.sc.sequenceFile(basepath + "/reserialize/sequence").collect())
        self.assertEqual(result1, data)

        rdd.saveAsHadoopFile(
            basepath + "/reserialize/hadoop",
            "org.apache.hadoop.mapred.SequenceFileOutputFormat")
        result2 = sorted(self.sc.sequenceFile(basepath + "/reserialize/hadoop").collect())
        self.assertEqual(result2, data)

        rdd.saveAsNewAPIHadoopFile(
            basepath + "/reserialize/newhadoop",
            "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat")
        result3 = sorted(self.sc.sequenceFile(basepath + "/reserialize/newhadoop").collect())
        self.assertEqual(result3, data)

        conf4 = {
            "mapred.output.format.class": "org.apache.hadoop.mapred.SequenceFileOutputFormat",
            "mapred.output.key.class": "org.apache.hadoop.io.IntWritable",
            "mapred.output.value.class": "org.apache.hadoop.io.IntWritable",
            "mapred.output.dir": basepath + "/reserialize/dataset"}
        rdd.saveAsHadoopDataset(conf4)
        result4 = sorted(self.sc.sequenceFile(basepath + "/reserialize/dataset").collect())
        self.assertEqual(result4, data)

        conf5 = {"mapreduce.outputformat.class":
                 "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat",
                 "mapred.output.key.class": "org.apache.hadoop.io.IntWritable",
                 "mapred.output.value.class": "org.apache.hadoop.io.IntWritable",
                 "mapred.output.dir": basepath + "/reserialize/newdataset"}
        rdd.saveAsNewAPIHadoopDataset(conf5)
        result5 = sorted(self.sc.sequenceFile(basepath + "/reserialize/newdataset").collect())
        self.assertEqual(result5, data)
    def test_unbatched_save_and_read(self):
        basepath = self.tempdir.name
        ei = [(1, u'aa'), (1, u'aa'), (2, u'aa'), (2, u'bb'), (2, u'bb'), (3, u'cc')]
        self.sc.parallelize(ei, len(ei)).saveAsSequenceFile(
            basepath + "/unbatched/")

        unbatched_sequence = sorted(self.sc.sequenceFile(
            basepath + "/unbatched/",
            batchSize=1).collect())
        self.assertEqual(unbatched_sequence, ei)

        unbatched_hadoopFile = sorted(self.sc.hadoopFile(
            basepath + "/unbatched/",
            "org.apache.hadoop.mapred.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.Text",
            batchSize=1).collect())
        self.assertEqual(unbatched_hadoopFile, ei)

        unbatched_newAPIHadoopFile = sorted(self.sc.newAPIHadoopFile(
            basepath + "/unbatched/",
            "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.Text",
            batchSize=1).collect())
        self.assertEqual(unbatched_newAPIHadoopFile, ei)

        oldconf = {"mapred.input.dir": basepath + "/unbatched/"}
        unbatched_hadoopRDD = sorted(self.sc.hadoopRDD(
            "org.apache.hadoop.mapred.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.Text",
            conf=oldconf,
            batchSize=1).collect())
        self.assertEqual(unbatched_hadoopRDD, ei)

        newconf = {"mapred.input.dir": basepath + "/unbatched/"}
        unbatched_newAPIHadoopRDD = sorted(self.sc.newAPIHadoopRDD(
            "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
            "org.apache.hadoop.io.IntWritable",
            "org.apache.hadoop.io.Text",
            conf=newconf,
            batchSize=1).collect())
        self.assertEqual(unbatched_newAPIHadoopRDD, ei)
    def test_malformed_RDD(self):
        basepath = self.tempdir.name
        # non-batch-serialized RDD[[(K, V)]] should be rejected
        data = [[(1, "a")], [(2, "aa")], [(3, "aaa")]]
        rdd = self.sc.parallelize(data, len(data))
        self.assertRaises(Exception, lambda: rdd.saveAsSequenceFile(
            basepath + "/malformed/sequence"))
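The malformed-RDD case above can be illustrated without Spark: `saveAsSequenceFile` expects each RDD element to be a single (key, value) pair, whereas the rejected data wraps each pair in a list. `looks_like_pair` is a hypothetical helper for this sketch, not part of the PySpark API:

```python
# Spark-free illustration of why RDD[[(K, V)]] is rejected: each element
# must itself be a (key, value) pair, not a list containing one.
well_formed = [(1, "a"), (2, "aa"), (3, "aaa")]
malformed = [[(1, "a")], [(2, "aa")], [(3, "aaa")]]

def looks_like_pair(elem):
    # hypothetical check: a 2-tuple is a valid (key, value) element
    return isinstance(elem, tuple) and len(elem) == 2

assert all(looks_like_pair(e) for e in well_formed)
assert not any(looks_like_pair(e) for e in malformed)
```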
f6aac55 [Nick Pentreath] Bring back msgpack
9d2256e [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
1bbbfb0 [Nick Pentreath] Clean up SparkBuild from merge
a67dfad [Nick Pentreath] Clean up Msgpack serialization and registering
7237263 [Nick Pentreath] Add back msgpack serializer and hadoop file code lost during merging
25da1ca [Nick Pentreath] Add generator for nulls, bools, bytes and maps
65360d5 [Nick Pentreath] Adding test SequenceFiles
0c612e5 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
d72bf18 [Nick Pentreath] msgpack
dd57922 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
e67212a [Nick Pentreath] Add back msgpack dependency
f2d76a0 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
41856a5 [Nick Pentreath] Merge branch 'master' into pyspark-inputformats
97ef708 [Nick Pentreath] Remove old writeToStream
2beeedb [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
795a763 [Nick Pentreath] Change name to WriteInputFormatTestDataGenerator. Cleanup some var names. Use SPARK_HOME in path for writing test sequencefile data.
174f520 [Nick Pentreath] Add back graphx settings
703ee65 [Nick Pentreath] Add back msgpack
619c0fa [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
1c8efbc [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
eb40036 [Nick Pentreath] Remove unused comment lines
4d7ef2e [Nick Pentreath] Fix indentation
f1d73e3 [Nick Pentreath] mergeConfs returns a copy rather than mutating one of the input arguments
0f5cd84 [Nick Pentreath] Remove unused pair UTF8 class. Add comments to msgpack deserializer
4294cbb [Nick Pentreath] Add old Hadoop api methods. Clean up and expand comments. Clean up argument names
818a1e6 [Nick Pentreath] Add seqencefile and Hadoop InputFormat support to PythonRDD
4e7c9e3 [Nick Pentreath] Merge remote-tracking branch 'upstream/master' into pyspark-inputformats
c304cc8 [Nick Pentreath] Adding supporting sequencefiles for tests. Cleaning up
4b0a43f [Nick Pentreath] Refactoring utils into own objects. Cleaning up old commented-out code
d86325f [Nick Pentreath] Initial WIP of PySpark support for SequenceFile and arbitrary Hadoop InputFormat
class TestDaemon(unittest.TestCase):

    def connect(self, port):
        from socket import socket, AF_INET, SOCK_STREAM
        sock = socket(AF_INET, SOCK_STREAM)
        sock.connect(('127.0.0.1', port))
        # send a split index of -1 to shut down the worker
        sock.send("\xFF\xFF\xFF\xFF")
        sock.close()
        return True

    def do_termination_test(self, terminator):
        from subprocess import Popen, PIPE
        from errno import ECONNREFUSED

        # start daemon
        daemon_path = os.path.join(os.path.dirname(__file__), "daemon.py")
        daemon = Popen([sys.executable, daemon_path], stdin=PIPE, stdout=PIPE)

        # read the port number
        port = read_int(daemon.stdout)

        # daemon should accept connections
        self.assertTrue(self.connect(port))

        # request shutdown
        terminator(daemon)
        time.sleep(1)

        # daemon should no longer accept connections
        try:
            self.connect(port)
        except EnvironmentError as exception:
            self.assertEqual(exception.errno, ECONNREFUSED)
        else:
            self.fail("Expected EnvironmentError to be raised")

    def test_termination_stdin(self):
        """Ensure that daemon and workers terminate when stdin is closed."""
        self.do_termination_test(lambda daemon: daemon.stdin.close())

    def test_termination_sigterm(self):
        """Ensure that daemon and workers terminate on SIGTERM."""
        from signal import SIGTERM
        self.do_termination_test(lambda daemon: os.kill(daemon.pid, SIGTERM))
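The `"\xFF\xFF\xFF\xFF"` literal sent by `connect()` above is the 4-byte big-endian encoding of the split index -1, which the daemon treats as a shutdown request. A minimal stdlib-only sketch of that framing (no Spark required; the `SHUTDOWN_SENTINEL` name is illustrative, not PySpark API):

```python
import struct

# A split index is framed as a 4-byte big-endian signed int; -1 is the
# shutdown sentinel the daemon tests send before closing the socket.
SHUTDOWN_SENTINEL = struct.pack(">i", -1)

# The packed bytes are exactly the literal used in connect() above.
assert SHUTDOWN_SENTINEL == b"\xff\xff\xff\xff"

# On the receiving side, read_int would decode it back to -1:
(split_index,) = struct.unpack(">i", SHUTDOWN_SENTINEL)
print(split_index)  # -1
```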
class TestWorker(PySparkTestCase):

    def test_cancel_task(self):
        temp = tempfile.NamedTemporaryFile(delete=True)
        temp.close()
        path = temp.name

        def sleep(x):
            import os
            import time
            with open(path, 'w') as f:
                f.write("%d %d" % (os.getppid(), os.getpid()))
            time.sleep(100)

        # start job in background thread
        def run():
            self.sc.parallelize(range(1)).foreach(sleep)
        import threading
        t = threading.Thread(target=run)
        t.daemon = True
        t.start()

        daemon_pid, worker_pid = 0, 0
        while True:
            if os.path.exists(path):
                data = open(path).read().split(' ')
                daemon_pid, worker_pid = map(int, data)
                break
            time.sleep(0.1)

        # cancel jobs
        self.sc.cancelAllJobs()
        t.join()

        for i in range(50):
            try:
                os.kill(worker_pid, 0)
                time.sleep(0.1)
            except OSError:
                break  # worker was killed
        else:
            self.fail("worker has not been killed after 5 seconds")

        try:
            os.kill(daemon_pid, 0)
        except OSError:
            self.fail("daemon was unexpectedly killed")

        # run a normal job
        rdd = self.sc.parallelize(range(100), 1)
        self.assertEqual(100, rdd.map(str).count())
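The `os.kill(pid, 0)` calls in `test_cancel_task` use the standard POSIX liveness probe: signal 0 delivers nothing but still performs the existence and permission checks, raising `OSError` if the pid is gone. A stand-alone sketch of the idiom (the `pid_alive` helper is made up here for illustration):

```python
import errno
import os

def pid_alive(pid):
    # Signal 0 performs the existence/permission check without
    # actually delivering a signal to the target process.
    try:
        os.kill(pid, 0)
        return True
    except OSError as e:
        # EPERM means the process exists but belongs to another user;
        # ESRCH (and anything else) means it is gone.
        return e.errno == errno.EPERM

print(pid_alive(os.getpid()))  # the current process is certainly alive
```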
    def test_fd_leak(self):
        N = 1100  # fd limit is 1024 by default
        rdd = self.sc.parallelize(range(N), N)
        self.assertEqual(N, rdd.count())
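`test_fd_leak` picks N = 1100 deliberately: the default soft limit on open file descriptors is commonly 1024, so a worker that leaked one descriptor per partition would exhaust the limit before the job finished. The actual limit on a given machine can be inspected with the stdlib `resource` module (POSIX only; the values shown in the comment are typical, not guaranteed):

```python
import resource

# soft limit: enforced right now; hard limit: the ceiling the soft
# limit may be raised to without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)  # e.g. (1024, 4096) on many Linux systems
```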
    def test_after_exception(self):
        def raise_exception(_):
            raise Exception()
        rdd = self.sc.parallelize(range(100), 1)
        self.assertRaises(Exception, lambda: rdd.foreach(raise_exception))
        self.assertEqual(100, rdd.map(str).count())

    def test_after_jvm_exception(self):
        tempFile = tempfile.NamedTemporaryFile(delete=False)
        tempFile.write("Hello World!")
        tempFile.close()
        data = self.sc.textFile(tempFile.name, 1)
        filtered_data = data.filter(lambda x: True)
        self.assertEqual(1, filtered_data.count())
        os.unlink(tempFile.name)
        self.assertRaises(Exception, lambda: filtered_data.count())

        rdd = self.sc.parallelize(range(100), 1)
        self.assertEqual(100, rdd.map(str).count())

    def test_accumulator_when_reuse_worker(self):
        from pyspark.accumulators import INT_ACCUMULATOR_PARAM
        acc1 = self.sc.accumulator(0, INT_ACCUMULATOR_PARAM)
        self.sc.parallelize(range(100), 20).foreach(lambda x: acc1.add(x))
        self.assertEqual(sum(range(100)), acc1.value)

        acc2 = self.sc.accumulator(0, INT_ACCUMULATOR_PARAM)
        self.sc.parallelize(range(100), 20).foreach(lambda x: acc2.add(x))
        self.assertEqual(sum(range(100)), acc2.value)
        self.assertEqual(sum(range(100)), acc1.value)
class TestSparkSubmit(unittest.TestCase):
    def setUp(self):
        self.programDir = tempfile.mkdtemp()
        self.sparkSubmit = os.path.join(os.environ.get("SPARK_HOME"), "bin", "spark-submit")

    def tearDown(self):
        shutil.rmtree(self.programDir)

    def createTempFile(self, name, content):
        """
        Create a temp file with the given name and content and return its path.
        Strips leading spaces from content up to the first '|' in each line.
        """
        pattern = re.compile(r'^ *\|', re.MULTILINE)
        content = re.sub(pattern, '', content.strip())
        path = os.path.join(self.programDir, name)
        with open(path, "w") as f:
            f.write(content)
        return path

    def createFileInZip(self, name, content):
        """
        Create a zip archive containing a file with the given content and return its path.
        Strips leading spaces from content up to the first '|' in each line.
        """
        pattern = re.compile(r'^ *\|', re.MULTILINE)
        content = re.sub(pattern, '', content.strip())
        path = os.path.join(self.programDir, name + ".zip")
        zip = zipfile.ZipFile(path, 'w')
        zip.writestr(name, content)
        zip.close()
        return path
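The inline scripts used by the tests below are written with a `|` left margin so they can stay indented inside the Python source; `createTempFile` and `createFileInZip` strip that margin with a multiline regex. A quick stand-alone check of the stripping, mirroring that logic outside the test class:

```python
import re

# Mirror of the margin-stripping in createTempFile/createFileInZip:
# remove leading spaces up to and including the first '|' on each line.
pattern = re.compile(r'^ *\|', re.MULTILINE)
content = """
    |from pyspark import SparkContext
    |sc = SparkContext()
    """
stripped = re.sub(pattern, '', content.strip())
print(stripped)
```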
    def test_single_script(self):
        """Submit and test a single script file"""
        script = self.createTempFile("test.py", """
            |from pyspark import SparkContext
            |
            |sc = SparkContext()
            |print sc.parallelize([1, 2, 3]).map(lambda x: x * 2).collect()
            """)
        proc = subprocess.Popen([self.sparkSubmit, script], stdout=subprocess.PIPE)
        out, err = proc.communicate()
        self.assertEqual(0, proc.returncode)
        self.assertIn("[2, 4, 6]", out)
    def test_script_with_local_functions(self):
        """Submit and test a single script file calling a global function"""
        script = self.createTempFile("test.py", """
            |from pyspark import SparkContext
            |
            |def foo(x):
            |    return x * 3
            |
            |sc = SparkContext()
            |print sc.parallelize([1, 2, 3]).map(foo).collect()
            """)
        proc = subprocess.Popen([self.sparkSubmit, script], stdout=subprocess.PIPE)
        out, err = proc.communicate()
        self.assertEqual(0, proc.returncode)
        self.assertIn("[3, 6, 9]", out)
    def test_module_dependency(self):
        """Submit and test a script with a dependency on another module"""
        script = self.createTempFile("test.py", """
            |from pyspark import SparkContext
            |from mylib import myfunc
            |
            |sc = SparkContext()
            |print sc.parallelize([1, 2, 3]).map(myfunc).collect()
            """)
        zip = self.createFileInZip("mylib.py", """
            |def myfunc(x):
            |    return x + 1
            """)
        proc = subprocess.Popen([self.sparkSubmit, "--py-files", zip, script],
                                stdout=subprocess.PIPE)
        out, err = proc.communicate()
        self.assertEqual(0, proc.returncode)
        self.assertIn("[2, 3, 4]", out)
    def test_module_dependency_on_cluster(self):
        """Submit and test a script with a dependency on another module on a cluster"""
        script = self.createTempFile("test.py", """
            |from pyspark import SparkContext
            |from mylib import myfunc
            |
            |sc = SparkContext()
            |print sc.parallelize([1, 2, 3]).map(myfunc).collect()
            """)
        zip = self.createFileInZip("mylib.py", """
            |def myfunc(x):
            |    return x + 1
            """)
        proc = subprocess.Popen([self.sparkSubmit, "--py-files", zip, "--master",
                                 "local-cluster[1,1,512]", script],
                                stdout=subprocess.PIPE)
        out, err = proc.communicate()
        self.assertEqual(0, proc.returncode)
        self.assertIn("[2, 3, 4]", out)
    def test_single_script_on_cluster(self):
        """Submit and test a single script on a cluster"""
        script = self.createTempFile("test.py", """
            |from pyspark import SparkContext
            |
            |def foo(x):
            |    return x * 2
            |
            |sc = SparkContext()
            |print sc.parallelize([1, 2, 3]).map(foo).collect()
            """)
        proc = subprocess.Popen(
            [self.sparkSubmit, "--master", "local-cluster[1,1,512]", script],
            stdout=subprocess.PIPE)
        out, err = proc.communicate()
        self.assertEqual(0, proc.returncode)
        self.assertIn("[2, 4, 6]", out)
class ContextStopTests(unittest.TestCase):

    def test_stop(self):
        sc = SparkContext()
        self.assertNotEqual(SparkContext._active_spark_context, None)
        sc.stop()
        self.assertEqual(SparkContext._active_spark_context, None)

    def test_with(self):
        with SparkContext() as sc:
            self.assertNotEqual(SparkContext._active_spark_context, None)
        self.assertEqual(SparkContext._active_spark_context, None)

    def test_with_exception(self):
        try:
            with SparkContext() as sc:
                self.assertNotEqual(SparkContext._active_spark_context, None)
                raise Exception()
        except:
            pass
        self.assertEqual(SparkContext._active_spark_context, None)

    def test_with_stop(self):
        with SparkContext() as sc:
            self.assertNotEqual(SparkContext._active_spark_context, None)
            sc.stop()
        self.assertEqual(SparkContext._active_spark_context, None)
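`ContextStopTests` relies on `SparkContext` implementing the context-manager protocol with an `__exit__` that always calls `stop()`, whether the block exits normally, via an explicit `stop()`, or via an exception. A Spark-free sketch of that contract (`FakeContext` is a hypothetical stand-in, not PySpark API):

```python
class FakeContext(object):
    """Hypothetical stand-in mirroring SparkContext's context-manager contract."""
    _active = None  # analogous to SparkContext._active_spark_context

    def __init__(self):
        FakeContext._active = self

    def stop(self):
        # Idempotent, like SparkContext.stop(): calling it twice is safe.
        FakeContext._active = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Always stop; returning None lets any in-flight exception propagate.
        self.stop()

try:
    with FakeContext():
        assert FakeContext._active is not None
        raise ValueError("boom")
except ValueError:
    pass
print(FakeContext._active is None)  # True: stopped despite the exception
```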
@unittest.skipIf(not _have_scipy, "SciPy not installed")
class SciPyTests(PySparkTestCase):

    """General PySpark tests that depend on scipy """

    def test_serialize(self):
        from scipy.special import gammaln
        x = range(1, 5)
        expected = map(gammaln, x)
        observed = self.sc.parallelize(x).map(gammaln).collect()
        self.assertEqual(expected, observed)
StatCounter on NumPy arrays [PYSPARK][SPARK-2012]
These changes allow StatCounters to work properly on NumPy arrays, to fix the issue reported here (https://issues.apache.org/jira/browse/SPARK-2012).
If NumPy is installed, the NumPy functions ``maximum``, ``minimum``, and ``sqrt``, which work on arrays, are used to merge statistics. If not, we fall back on scalar operators, so it will work on arrays with NumPy, but will also work without NumPy.
New unit tests added, along with a check for NumPy in the tests.
Author: Jeremy Freeman <the.freeman.lab@gmail.com>
Closes #1725 from freeman-lab/numpy-max-statcounter and squashes the following commits:
fe973b1 [Jeremy Freeman] Avoid duplicate array import in tests
7f0e397 [Jeremy Freeman] Refactored check for numpy
8e764dd [Jeremy Freeman] Explicit numpy imports
875414c [Jeremy Freeman] Fixed indents
1c8a832 [Jeremy Freeman] Unit tests for StatCounter with NumPy arrays
176a127 [Jeremy Freeman] Use numpy arrays in StatCounter
2014-08-02 01:33:25 -04:00
@unittest.skipIf(not _have_numpy, "NumPy not installed")
class NumPyTests(PySparkTestCase):

    """General PySpark tests that depend on numpy """

    def test_statcounter_array(self):
        x = self.sc.parallelize([np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([3.0, 3.0])])
        s = x.stats()
        self.assertSequenceEqual([2.0, 2.0], s.mean().tolist())
        self.assertSequenceEqual([1.0, 1.0], s.min().tolist())
        self.assertSequenceEqual([3.0, 3.0], s.max().tolist())
        self.assertSequenceEqual([1.0, 1.0], s.sampleStdev().tolist())
if __name__ == "__main__":
    if not _have_scipy:
        print "NOTE: Skipping SciPy tests as it does not seem to be installed"
    if not _have_numpy:
        print "NOTE: Skipping NumPy tests as it does not seem to be installed"
    unittest.main()
    if not _have_scipy:
        print "NOTE: SciPy tests were skipped as it does not seem to be installed"
StatCounter on NumPy arrays [PYSPARK][SPARK-2012]
These changes allow StatCounters to work properly on NumPy arrays, to fix the issue reported here (https://issues.apache.org/jira/browse/SPARK-2012).
If NumPy is installed, the NumPy functions ``maximum``, ``minimum``, and ``sqrt``, which work on arrays, are used to merge statistics. If not, we fall back on scalar operators, so it will work on arrays with NumPy, but will also work without NumPy.
New unit tests added, along with a check for NumPy in the tests.
Author: Jeremy Freeman <the.freeman.lab@gmail.com>
Closes #1725 from freeman-lab/numpy-max-statcounter and squashes the following commits:
fe973b1 [Jeremy Freeman] Avoid duplicate array import in tests
7f0e397 [Jeremy Freeman] Refactored check for numpy
8e764dd [Jeremy Freeman] Explicit numpy imports
875414c [Jeremy Freeman] Fixed indents
1c8a832 [Jeremy Freeman] Unit tests for StatCounter with NumPy arrays
176a127 [Jeremy Freeman] Use numpy arrays in StatCounter
2014-08-02 01:33:25 -04:00
|
|
|
if not _have_numpy:
|
|
|
|
print "NOTE: NumPy tests were skipped as it does not seem to be installed"
|