Commit graph

81 commits

Author SHA1 Message Date
Matei Zaharia d92d3f7938 Fix resolution of example code with Maven builds 2013-06-22 10:24:19 -07:00
Reynold Xin 43644a293f Only check for repl classes if the user is running the repl. Otherwise,
check for core classes in run. This fixed the problem that core tests
depended on whether the repl module was compiled or not.
2013-05-16 14:31:38 -07:00
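
A guard along these lines would do it (a minimal sketch; SPARK_REPL, REPL_DIR and CORE_DIR are illustrative names, not necessarily those in the actual run script):

    if [ -n "$SPARK_REPL" ]; then
      # Running the repl: the repl module must have been built.
      if [ ! -e "$REPL_DIR/target" ]; then
        echo "Failed to find repl classes - build Spark first" >&2
        exit 1
      fi
    elif [ ! -e "$CORE_DIR/target" ]; then
      # Everything else needs only core, so the repl need not be compiled.
      echo "Failed to find core classes - build Spark first" >&2
      exit 1
    fi
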
Mridul Muralidharan ee37612bc9 1) Add support for HADOOP_CONF_DIR (and/or YARN_CONF_DIR - either works), which specifies the client-side configuration directory that needs to be part of the CLASSPATH.
2) Move from var+=".." to var="$var.." - the former unfortunately does not work on older bash shells.
2013-05-11 11:12:22 +05:30
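
How the two changes might look in the run script (a sketch based on the message above; FWDIR is illustrative):

    # 1) Put the client-side Hadoop/YARN configuration directory on the classpath.
    if [ -n "$HADOOP_CONF_DIR" ]; then
      CLASSPATH="$CLASSPATH:$HADOOP_CONF_DIR"
    elif [ -n "$YARN_CONF_DIR" ]; then
      CLASSPATH="$CLASSPATH:$YARN_CONF_DIR"
    fi
    # 2) Portable string append: older bash releases lack the += operator.
    CLASSPATH="$CLASSPATH:$FWDIR/conf"    # instead of CLASSPATH+=":$FWDIR/conf"
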
Mridul Muralidharan e46d547ccd Fix issues reported by Reynold 2013-04-30 16:15:56 +05:30
Mike 6f68860891 Reversed the order of tests used to find a scala executable (in the case when SPARK_LAUNCH_WITH_SCALA is defined): instead of checking the PATH first and only then (if not found) falling back to SCALA_HOME, we now check SCALA_HOME first and only look in the PATH if it is not defined. The advantage is that if the user has a more recent (incompatible) version of scala in her PATH, she can use SCALA_HOME to point to the older (compatible) version for use with Spark.
Suggested by Josh Rosen in this thread:

  https://groups.google.com/forum/?fromgroups=#!topic/spark-users/NC9JKvP8808
2013-04-11 20:52:06 -07:00
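
The new lookup order, roughly (a sketch of the logic, not the literal script):

    if [ -n "$SCALA_HOME" ]; then
      RUNNER="$SCALA_HOME/bin/scala"    # explicit SCALA_HOME wins
    elif command -v scala >/dev/null 2>&1; then
      RUNNER="scala"                    # otherwise fall back to the PATH
    else
      echo "SCALA_HOME is not set and no scala executable was found on the PATH" >&2
      exit 1
    fi
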
Matei Zaharia eed54a25d8 Merge pull request #553 from pwendell/akka-standalone
SPARK-724 - Have Akka logging enabled by default for standalone daemons
2013-04-08 09:44:30 -07:00
Matei Zaharia 1cb3eb9762 Merge remote-tracking branch 'kalpit/master'
Conflicts:
	project/SparkBuild.scala
2013-04-07 20:54:18 -04:00
Patrick Wendell b496decf0a Updating based on code review 2013-04-07 17:44:48 -07:00
Patrick Wendell 9b68ceaa26 SPARK-724 - Have Akka logging enabled by default for standalone daemons
See the JIRA for more details.

I was only able to test the bash version (I don't have Windows),
so please check that the syntax is correct there.
2013-04-03 14:29:46 -07:00
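
One plausible shape of the change, assuming the daemons read SPARK_DAEMON_JAVA_OPTS (the property name below is illustrative, not taken from the diff):

    # Enable Akka's lifecycle event logging for the standalone daemons by default.
    SPARK_DAEMON_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS -Dspark.akka.logLifecycleEvents=true"
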
Matei Zaharia 434a1ce773 Small hack to work around multiple JARs being built by sbt package 2013-02-26 12:24:18 -08:00
Matei Zaharia 5d7b591cfe Pass a code JAR to SparkContext in our examples. Fixes SPARK-594. 2013-02-25 19:34:32 -08:00
Matei Zaharia 25f737804a Change tabs to spaces 2013-02-25 11:53:55 -08:00
Tathagata Das 5ab37be983 Fixed class paths and dependencies based on Matei's comments. 2013-02-24 16:24:52 -08:00
Tathagata Das dff53d1b94 Merge branch 'mesos-master' into streaming 2013-02-24 12:17:22 -08:00
Tathagata Das fb9956256d Merge branch 'mesos-master' into streaming
Conflicts:
	core/src/main/scala/spark/rdd/CheckpointRDD.scala
	streaming/src/main/scala/spark/streaming/dstream/ReducedWindowedDStream.scala
2013-02-20 09:01:29 -08:00
haitao.yao 858784459f support customized java options for master, worker, executor, repl shell 2013-02-16 14:42:06 +08:00
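
A sketch of per-role options, keyed on the class being launched (the *_OPTS variable names and class names are illustrative, not confirmed from the diff):

    case "$1" in
      spark.deploy.master.Master)
        OUR_JAVA_OPTS="$SPARK_JAVA_OPTS $SPARK_MASTER_OPTS" ;;
      spark.deploy.worker.Worker)
        OUR_JAVA_OPTS="$SPARK_JAVA_OPTS $SPARK_WORKER_OPTS" ;;
      spark.executor.StandaloneExecutorBackend)
        OUR_JAVA_OPTS="$SPARK_JAVA_OPTS $SPARK_EXECUTOR_OPTS" ;;
      spark.repl.Main)
        OUR_JAVA_OPTS="$SPARK_JAVA_OPTS $SPARK_REPL_OPTS" ;;
      *)
        OUR_JAVA_OPTS="$SPARK_JAVA_OPTS" ;;
    esac
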
Matei Zaharia 05d2e94838 Use a separate memory setting for standalone cluster daemons
Conflicts:
	docs/_config.yml
2013-02-10 21:59:41 -08:00
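
The idea in shell form (a sketch; SPARK_DAEMON_MEMORY is the separate setting, while the default value and the RUNNING_DAEMON flag are illustrative):

    # Master/worker daemons use their own memory setting, so a large
    # SPARK_MEM for jobs does not also inflate the daemon JVMs.
    if [ "$RUNNING_DAEMON" = "1" ]; then
      SPARK_MEM="${SPARK_DAEMON_MEMORY:-512m}"
    fi
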
Tathagata Das 4cc223b478 Merge branch 'mesos-master' into streaming 2013-02-07 13:59:31 -08:00
Tathagata Das 12300758cc Merge pull request #372 from Reinvigorate/sm-kafka
Removing offset management code that is non-existent in kafka 0.7.0+
2013-02-07 12:41:07 -08:00
Matei Zaharia 4750907c3d Update run script to deal with change to build of REPL shaded JAR 2013-01-20 21:05:17 -08:00
Matei Zaharia 86057ec7c8 Merge branch 'master' into streaming
Conflicts:
	core/src/main/scala/spark/api/python/PythonRDD.scala
2013-01-20 12:47:55 -08:00
seanm 1db119a08f Kafka jar wasn't being included by the run script 2013-01-18 20:34:10 -07:00
Matei Zaharia 892c32a14b Warn users if they run pyspark or spark-shell without compiling Spark 2013-01-17 11:14:47 -08:00
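
A sketch of that warning (paths illustrative):

    if [ ! -e "$CORE_DIR/target" ]; then
      echo "Spark classes not found - run sbt/sbt package before launching" \
           "spark-shell or pyspark" >&2
      exit 1
    fi
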
Tathagata Das cd1521cfdb Merge branch 'master' into streaming
Conflicts:
	core/src/main/scala/spark/rdd/CoGroupedRDD.scala
	core/src/main/scala/spark/rdd/FilteredRDD.scala
	docs/_layouts/global.html
	docs/index.md
	run
2013-01-15 12:08:51 -08:00
Matei Zaharia fbb3fc4143 Merge pull request #346 from JoshRosen/python-api
Python API (PySpark)
2013-01-12 23:49:36 -08:00
Stephen Haberman c3f1675f9c Retrieve jars to a flat directory so * can be used for the classpath. 2013-01-08 14:44:33 -06:00
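
With the jars retrieved into a single flat directory, the classpath can use Java 6's wildcard syntax instead of listing each jar (the path below is illustrative):

    CLASSPATH="$CLASSPATH:$FWDIR/lib_managed/jars/*"
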
Tathagata Das 934ecc829a Removed streaming-env.sh.template 2013-01-06 14:15:07 -08:00
Josh Rosen b58340dbd9 Rename top-level 'pyspark' directory to 'python' 2013-01-01 15:05:00 -08:00
Josh Rosen c5cee53f20 Merge remote-tracking branch 'origin/master' into python-api
Conflicts:
	docs/quick-start.md
2012-12-29 16:00:51 -08:00
Josh Rosen 665466dfff Simplify PySpark installation.
- Bundles Py4J binaries, since it's hard to install
- Uses Spark's `run` script to launch the Py4J
  gateway, inheriting the settings in spark-env.sh

With these changes, (hopefully) nothing more than
running `sbt/sbt package` will be necessary to run
PySpark.
2012-12-27 22:47:37 -08:00
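
A heavily simplified sketch of launching the gateway through the run script so that spark-env.sh is inherited (py4j.GatewayServer is Py4J's standard entry point, but the exact invocation and arguments here are an assumption):

    exec "$SPARK_HOME/run" py4j.GatewayServer
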
Reynold Xin eac566a7f4 Merge branch 'master' of github.com:mesos/spark into dev
Conflicts:
	core/src/main/scala/spark/MapOutputTracker.scala
	core/src/main/scala/spark/PairRDDFunctions.scala
	core/src/main/scala/spark/ParallelCollection.scala
	core/src/main/scala/spark/RDD.scala
	core/src/main/scala/spark/rdd/BlockRDD.scala
	core/src/main/scala/spark/rdd/CartesianRDD.scala
	core/src/main/scala/spark/rdd/CoGroupedRDD.scala
	core/src/main/scala/spark/rdd/CoalescedRDD.scala
	core/src/main/scala/spark/rdd/FilteredRDD.scala
	core/src/main/scala/spark/rdd/FlatMappedRDD.scala
	core/src/main/scala/spark/rdd/GlommedRDD.scala
	core/src/main/scala/spark/rdd/HadoopRDD.scala
	core/src/main/scala/spark/rdd/MapPartitionsRDD.scala
	core/src/main/scala/spark/rdd/MapPartitionsWithSplitRDD.scala
	core/src/main/scala/spark/rdd/MappedRDD.scala
	core/src/main/scala/spark/rdd/PipedRDD.scala
	core/src/main/scala/spark/rdd/SampledRDD.scala
	core/src/main/scala/spark/rdd/ShuffledRDD.scala
	core/src/main/scala/spark/rdd/UnionRDD.scala
	core/src/main/scala/spark/storage/BlockManager.scala
	core/src/main/scala/spark/storage/BlockManagerId.scala
	core/src/main/scala/spark/storage/BlockManagerMaster.scala
	core/src/main/scala/spark/storage/StorageLevel.scala
	core/src/main/scala/spark/util/MetadataCleaner.scala
	core/src/main/scala/spark/util/TimeStampedHashMap.scala
	core/src/test/scala/spark/storage/BlockManagerSuite.scala
	run
2012-12-20 14:53:40 -08:00
Matei Zaharia 01c1f97e95 Make "run" script work with Maven builds 2012-12-10 15:13:16 -08:00
Tathagata Das ae61ebaee6 Fixed bugs in RawNetworkInputDStream and in its examples. Made the ReducedWindowedDStream persist RDDs to MEMORY_SER_ONLY by default. Removed unnecessary examples. Added streaming-env.sh.template to add recommended settings for streaming. 2012-11-12 21:45:16 +00:00
Matei Zaharia 863a55ae42 Merge remote-tracking branch 'public/master' into dev
Conflicts:
	core/src/main/scala/spark/BlockStoreShuffleFetcher.scala
	core/src/main/scala/spark/KryoSerializer.scala
	core/src/main/scala/spark/MapOutputTracker.scala
	core/src/main/scala/spark/RDD.scala
	core/src/main/scala/spark/SparkContext.scala
	core/src/main/scala/spark/executor/Executor.scala
	core/src/main/scala/spark/network/Connection.scala
	core/src/main/scala/spark/network/ConnectionManagerTest.scala
	core/src/main/scala/spark/rdd/BlockRDD.scala
	core/src/main/scala/spark/rdd/NewHadoopRDD.scala
	core/src/main/scala/spark/scheduler/ShuffleMapTask.scala
	core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
	core/src/main/scala/spark/storage/BlockManager.scala
	core/src/main/scala/spark/storage/BlockMessage.scala
	core/src/main/scala/spark/storage/BlockStore.scala
	core/src/main/scala/spark/storage/StorageLevel.scala
	core/src/main/scala/spark/util/AkkaUtils.scala
	project/SparkBuild.scala
	run
2012-10-24 23:21:00 -07:00
Thomas Dudziak f595bb53d1 Tweaked run file to live more happily with Typesafe's Debian package 2012-10-22 13:11:05 -07:00
Matei Zaharia 4a3e9cf69c Document how to configure SPARK_MEM & co on a per-job basis 2012-10-13 16:20:25 -07:00
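
For example, settings like SPARK_MEM can be supplied per job at launch time (values illustrative):

    SPARK_MEM=4g ./run spark.examples.SparkPi local[4]
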
root ce915cadee Made run script add test-classes onto the classpath only if SPARK_TESTING is set; fixes #216 2012-10-07 04:19:16 +00:00
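
A sketch of the guard (paths illustrative):

    if [ -n "$SPARK_TESTING" ]; then
      CLASSPATH="$CLASSPATH:$CORE_DIR/target/scala-$SCALA_VERSION/test-classes"
    fi
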
Matei Zaharia c535762deb Don't check for JARs in core/lib anymore 2012-10-04 15:11:43 -07:00
Matei Zaharia 1f539aa473 Update Scala version dependency to 2.9.2 2012-09-24 14:12:48 -07:00
Matei Zaharia 995982b3c9 Added a unit test for local-cluster mode and simplified some of the code involved in that 2012-09-07 17:08:36 -07:00
Matei Zaharia 47b7ebad12 Added the Spark Streaming code, ported to Akka 2 2012-07-28 20:03:26 -07:00
Matei Zaharia dc8763fcf7 Fixed SPARK_MEM not being passed when runner is java 2012-07-28 19:53:31 -07:00
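
A sketch of the fix: the scala launcher honors the JAVA_OPTS environment variable, but plain java does not, so the memory flags have to be passed on the command line (variable names illustrative):

    export JAVA_OPTS="-Xms$SPARK_MEM -Xmx$SPARK_MEM"   # honored by the scala launcher
    EXTRA_ARGS=""
    if [ "$RUNNER" = "java" ]; then
      EXTRA_ARGS="$JAVA_OPTS"    # plain java ignores the env var; pass explicitly
    fi
    exec "$RUNNER" -cp "$CLASSPATH" $EXTRA_ARGS "$@"
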
Matei Zaharia 0a47284003 More work to allow Spark to run on the standalone deploy cluster. 2012-07-08 14:00:04 -07:00
Matei Zaharia 408b5a1332 More work on deploy code (adding Worker class) 2012-06-30 16:45:57 -07:00
Matei Zaharia 08cda89e8a Further fixes to how Mesos is found and used 2012-03-17 13:39:14 -07:00
Ismael Juma 4019305afe Set SCALA_VERSION to 2.9.1 (from 2.9.1.final) to match expectation of SBT 0.11.0 2011-09-26 22:44:41 +01:00
Ismael Juma 483f724d62 Upgrade to Scala 2.9.1.
Interestingly, the version in Maven is 2.9.1, but SBT outputs files to the 2.9.1.final
directory inside target.

A couple of small changes in SparkIMain were also required.

All tests pass and ./spark-shell launches successfully.
2011-08-31 10:43:05 +01:00
Matei Zaharia 0c965a102e Removed a debugging line 2011-08-29 23:13:33 -07:00
Matei Zaharia 711575391d Merge branch 'scala-2.9'
Conflicts:
	project/build/SparkProject.scala
2011-08-01 15:25:26 -07:00
Matei Zaharia 4050d661c5 Updated to newest Mesos API, which includes better memory accounting
by specifying per-executor memory.
2011-08-01 13:54:48 -07:00