Josh Rosen
49be084ed3
Use File.pathSeparator instead of hardcoding ':'.
2013-07-29 22:08:57 -07:00
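The commit above swaps a hardcoded ':' for the platform-aware constant, so path lists also work on Windows (where the separator is ';'). A minimal Java sketch of the pattern — the class-path joining here is illustrative, not Spark's actual code:

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

public class PathSeparatorExample {
    // Join entries with the platform's path separator (":" on
    // Unix-like systems, ";" on Windows) instead of hardcoding ':'.
    static String joinClassPath(List<String> entries) {
        return String.join(File.pathSeparator, entries);
    }

    public static void main(String[] args) {
        List<String> entries = Arrays.asList("/opt/spark/core.jar", "/opt/spark/repl.jar");
        System.out.println(joinClassPath(entries));
    }
}
```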
Josh Rosen
b95732632b
Do not inherit master's PYTHONPATH on workers.
...
This fixes SPARK-832, an issue where PySpark
would not work when the master and workers used
different SPARK_HOME paths.
This change may break code that relied
on the master's PYTHONPATH being used on workers.
To have custom PYTHONPATH additions used on the
workers, users should set a custom PYTHONPATH in
spark-env.sh rather than setting it in the shell.

2013-07-29 22:08:57 -07:00
Matei Zaharia
b9d6783f36
Optimize Python take() to not compute entire first partition
2013-07-29 02:51:43 -04:00
Josh Rosen
f649dabb4a
Fix bug: DoubleRDDFunctions.sampleStdev() computed non-sample stdev().
...
Update JavaDoubleRDD to add new methods and docs.
Fixes SPARK-825.
2013-07-22 13:21:48 -07:00
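The bug above is the classic n vs. n - 1 mix-up: the population standard deviation divides the sum of squared deviations by n, while the sample estimator divides by n - 1 (Bessel's correction). A self-contained Java sketch of the distinction (these are illustrative implementations, not Spark's DoubleRDDFunctions code):

```java
public class StdevExample {
    // Population stdev: divide the sum of squared deviations by n.
    static double stdev(double[] xs) {
        return Math.sqrt(sumSqDev(xs) / xs.length);
    }

    // Sample stdev: divide by n - 1 (Bessel's correction). This is
    // what a method named sampleStdev() should compute.
    static double sampleStdev(double[] xs) {
        return Math.sqrt(sumSqDev(xs) / (xs.length - 1));
    }

    private static double sumSqDev(double[] xs) {
        double mean = 0.0;
        for (double x : xs) mean += x;
        mean /= xs.length;
        double ss = 0.0;
        for (double x : xs) ss += (x - mean) * (x - mean);
        return ss;
    }

    public static void main(String[] args) {
        double[] xs = {1.0, 2.0, 3.0, 4.0};
        System.out.println(stdev(xs));       // sqrt(5.0 / 4)
        System.out.println(sampleStdev(xs)); // sqrt(5.0 / 3)
    }
}
```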
Matei Zaharia
af3c9d5042
Add Apache license headers and LICENSE and NOTICE files
2013-07-16 17:21:33 -07:00
seanm
a1662326e9
comment adjustment to takeOrdered
2013-07-12 08:38:19 -07:00
seanm
ee4ce2fc51
adding takeOrdered to java API
2013-07-10 10:46:04 -07:00
root
ec31e68d5d
Fixed PySpark perf regression by not using socket.makefile(), and improved debuggability by letting "print" statements show up in the executor's stderr
...
Conflicts:
core/src/main/scala/spark/api/python/PythonRDD.scala
2013-07-01 06:26:31 +00:00
root
3296d132b6
Fix performance bug with new Python code not using buffered streams
2013-07-01 06:25:43 +00:00
Jey Kottalam
1ba3c17303
use parens when calling method with side-effects
2013-06-21 12:14:16 -04:00
Jey Kottalam
edb18ca928
Rename PythonWorker to PythonWorkerFactory
2013-06-21 12:14:16 -04:00
Jey Kottalam
62c4781400
Add tests and fixes for Python daemon shutdown
2013-06-21 12:14:16 -04:00
Jey Kottalam
c79a6078c3
Prefork Python worker processes
2013-06-21 12:14:16 -04:00
Jey Kottalam
40afe0d2a5
Add Python timing instrumentation
2013-06-21 12:14:16 -04:00
Matei Zaharia
f961aac8b2
Merge pull request #649 from ryanlecompte/master
...
Add top K method to RDD using a bounded priority queue
2013-06-15 00:53:41 -07:00
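The pull request above adds a top-K method backed by a bounded priority queue: keep at most k elements in a min-heap, and once the heap is full, a new element replaces the current minimum only if it is larger, bounding memory at O(k) regardless of input size. A self-contained Java sketch of the technique (the class and method names here are illustrative, not Spark's BoundedPriorityQueue):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class TopKExample {
    // Return the k largest elements in descending order, using a
    // size-bounded min-heap so memory stays O(k).
    static List<Integer> topK(Iterable<Integer> xs, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(k);
        for (int x : xs) {
            if (heap.size() < k) {
                heap.offer(x);
            } else if (heap.peek() < x) {
                // Evict the smallest retained element.
                heap.poll();
                heap.offer(x);
            }
        }
        List<Integer> result = new ArrayList<>(heap);
        result.sort(Collections.reverseOrder());
        return result;
    }

    public static void main(String[] args) {
        // → [9, 7, 5]
        System.out.println(topK(java.util.Arrays.asList(5, 1, 9, 3, 7, 2), 3));
    }
}
```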
ryanlecompte
e8801d4490
use delegation for BoundedPriorityQueue, add Java API
2013-06-14 23:39:05 -07:00
Patrick Wendell
ef14dc2e77
Adding Java-API version of compression codec
2013-06-09 18:09:46 -07:00
Shivaram Venkataraman
bb8a434f9d
Add zipPartitions to Java API.
2013-05-03 15:14:02 -07:00
Reynold Xin
4a31877408
Added the unpersist api to JavaRDD.
2013-05-01 20:31:54 -07:00
Mridul Muralidharan
d90d2af103
Checkpoint commit - compiles and passes a lot of tests - not all though, looking into FileSuite issues
2013-04-15 18:12:11 +05:30
Stephen Haberman
dd854d5b9f
Use Boolean in the Java API, and != for assert.
2013-03-23 11:49:45 -05:00
Stephen Haberman
1c67c7dfd1
Add a shuffle parameter to coalesce.
...
This is useful when you want just one output file (part-00000) but
still want the upstream RDD to be computed in parallel.
2013-03-22 08:54:44 -05:00
Mark Hamstra
ca9f81e8fc
refactor foldByKey to use combineByKey
2013-03-16 13:31:01 -07:00
Mark Hamstra
857010392b
Fuller implementation of foldByKey
2013-03-15 10:56:05 -07:00
Mark Hamstra
16a4ca4537
restrict V type of foldByKey in order to retain ClassManifest; added foldByKey to Java API and test
2013-03-14 13:58:37 -07:00
Matei Zaharia
73697e2891
Fix overly large thread names in PySpark
2013-02-26 12:07:59 -08:00
Matei Zaharia
490f056cdd
Allow passing sparkHome and JARs to StreamingContext constructor
...
Also warns if spark.cleaner.ttl is not set in the version where you pass
your own SparkContext.
2013-02-25 15:13:30 -08:00
Stephen Haberman
44032bc476
Merge branch 'master' into bettersplits
...
Conflicts:
core/src/main/scala/spark/RDD.scala
core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
core/src/test/scala/spark/ShuffleSuite.scala
2013-02-24 22:08:14 -06:00
Stephen Haberman
37c7a71f9c
Add subtract to JavaRDD, JavaDoubleRDD, and JavaPairRDD.
2013-02-24 00:27:53 -06:00
Matei Zaharia
7341de0d48
Merge pull request #475 from JoshRosen/spark-668
...
Remove hack workaround for SPARK-668
2013-02-22 14:56:18 -08:00
Patrick Wendell
f8c3a03d55
SPARK-702: Replace Function --> JFunction in JavaAPI Suite.
...
In a few places the Scala (rather than Java) function class is used.
2013-02-22 12:54:15 -08:00
Matei Zaharia
7151e1e4c8
Rename "jobs" to "applications" in the standalone cluster
2013-02-17 23:23:08 -08:00
Matei Zaharia
06e5e6627f
Renamed "splits" to "partitions"
2013-02-17 22:13:26 -08:00
Stephen Haberman
4281e579c2
Update more javadocs.
2013-02-16 00:45:03 -06:00
Stephen Haberman
6cd68c31cb
Update default.parallelism docs, have StandaloneSchedulerBackend use it.
...
Only brand new RDDs (e.g. parallelize and makeRDD) now use default
parallelism; everything else uses its largest parent's partitioner
or partition size.
2013-02-16 00:29:11 -06:00
Patrick Wendell
21df6ffc13
SPARK-696: sortByKey should use 'ascending' parameter
2013-02-11 17:43:26 -08:00
Josh Rosen
e9fb25426e
Remove hack workaround for SPARK-668.
...
Renaming the type parameters solves this problem (see SPARK-694).
I tried this fix earlier, but it didn't work because I didn't run
`sbt/sbt clean` first.
2013-02-11 11:19:20 -08:00
Matei Zaharia
f750daa510
Merge pull request #452 from stephenh/misc
...
Add RDD.coalesce, clean up some RDDs, other misc.
2013-02-09 18:12:56 -08:00
Stephen Haberman
4619ee0787
Move JavaRDDLike.coalesce into the right places.
2013-02-09 20:05:42 -06:00
Stephen Haberman
fb7599870f
Fix JavaRDDLike.coalesce return type.
2013-02-09 16:10:52 -06:00
Stephen Haberman
da52b16b38
Remove RDD.coalesce default arguments.
2013-02-09 10:11:54 -06:00
Mark Hamstra
b8863a79d3
Merge branch 'master' of https://github.com/mesos/spark into commutative
...
Conflicts:
core/src/main/scala/spark/RDD.scala
2013-02-08 18:26:00 -08:00
Mark Hamstra
934a53c8b6
Change docs on 'reduce' since the merging of local reduces no longer preserves ordering, so the reduce function must also be commutative.
2013-02-05 22:19:58 -08:00
Stephen Haberman
f2bc748013
Add RDD.coalesce.
2013-02-05 21:23:36 -06:00
Josh Rosen
8fbd5380b7
Fetch fewer objects in PySpark's take() method.
2013-02-03 06:44:49 +00:00
Patrick Wendell
39ab83e957
Small fix from last commit
2013-01-31 21:52:52 -08:00
Patrick Wendell
c33f0ef41a
Some style cleanup
2013-01-31 21:50:02 -08:00
Patrick Wendell
3446d5c8d6
SPARK-673: Capture and re-throw Python exceptions
...
This patch alters the Python <-> executor protocol to pass on
exception data when they occur in user Python code.
2013-01-31 18:06:11 -08:00
Matei Zaharia
ccb67ff2ca
Merge pull request #425 from stephenh/toDebugString
...
Add RDD.toDebugString.
2013-01-29 10:44:18 -08:00
Matei Zaharia
64ba6a8c2c
Simplify checkpointing code and RDD class a little:
...
- RDD's getDependencies and getSplits methods are now guaranteed to be
called only once, so subclasses can safely do computation in there
without worrying about caching the results.
- The management of a "splits_" variable that is cleared out when we
checkpoint an RDD is now done in the RDD class.
- A few of the RDD subclasses are simpler.
- CheckpointRDD's compute() method no longer assumes that it is given a
CheckpointRDDSplit -- it can work just as well on a split from the
original RDD, because it only looks at its index. This is important
because things like UnionRDD and ZippedRDD remember the parent's
splits as part of their own and wouldn't work on checkpointed parents.
- RDD.iterator can now reuse cached data if an RDD is computed before it
is checkpointed. It seems like it wouldn't do this before (it always
called iterator() on the CheckpointRDD, which read from HDFS).
2013-01-28 22:30:12 -08:00