Commit graph

2364 commits

Author SHA1 Message Date
tgravescs 13a19505e4 Don't call doAs if the user is unknown or is the same user that is already running 2013-11-08 12:04:09 -06:00
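
A minimal sketch of the guard this commit describes, assuming Hadoop's UserGroupInformation API; the helper name and argument handling here are hypothetical, not the actual Spark code:

    import java.security.PrivilegedExceptionAction
    import org.apache.hadoop.security.UserGroupInformation

    // Hypothetical guard: only wrap the body in doAs when we have a known
    // user that differs from the user already running the process.
    def runAsUserIfNeeded(user: String)(body: () => Unit) {
      val current = UserGroupInformation.getCurrentUser.getShortUserName
      if (user == null || user.isEmpty || user == current) {
        body()  // unknown or same user: an extra doAs breaks secure HDFS access
      } else {
        UserGroupInformation.createRemoteUser(user).doAs(
          new PrivilegedExceptionAction[Unit] {
            override def run() { body() }
          })
      }
    }
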
tgravescs f95cb04e40 Remove the runAsUser as it breaks secure hdfs access 2013-11-08 10:07:15 -06:00
tgravescs 5f9ed51719 Fix access to Secure HDFS 2013-11-08 08:41:57 -06:00
Reynold Xin 3d4ad84b63 Merge pull request #148 from squito/include_appId
Include appId in executor cmd line args

add the appId back into the executor cmd line args.

I also made a pretty lame regression test, just to make sure it doesn't get dropped in the future.  Not sure it will run on the build server, though, b/c `ExecutorRunner.buildCommandSeq()` expects to be able to run the scripts in `bin`.
2013-11-07 11:08:27 -08:00
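
A rough sketch of passing the appId through on the executor command line; the argument order and class name below are assumptions for illustration, not the actual ExecutorRunner code:

    // Hypothetical command construction: the application ID is appended as a
    // positional argument so the executor can identify itself to the master.
    def buildCommandSeq(runner: String, driverUrl: String, executorId: String,
                        hostname: String, cores: Int, appId: String): Seq[String] = {
      Seq(runner, "org.apache.spark.executor.StandaloneExecutorBackend",
        driverUrl, executorId, hostname, cores.toString, appId)
    }
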
Imran Rashid ca66f5d5a2 fix formatting 2013-11-07 07:23:59 -06:00
Imran Rashid 8d3cdda9a2 very basic regression test to make sure appId doesn't get dropped in future 2013-11-07 01:35:48 -06:00
Imran Rashid 36e832bff0 include the appid in the cmd line arguments to Executors 2013-11-07 01:11:49 -06:00
jerryshao 12dc385a49 Add Spark multi-user support for standalone mode and Mesos 2013-11-07 11:18:09 +08:00
Reynold Xin aadeda5e76 Merge pull request #144 from liancheng/runjob-clean
Removed unused return value in SparkContext.runJob

Return type of this `runJob` version is `Unit`:

    def runJob[T, U: ClassManifest](
        rdd: RDD[T],
        func: (TaskContext, Iterator[T]) => U,
        partitions: Seq[Int],
        allowLocal: Boolean,
        resultHandler: (Int, U) => Unit) {
        ...
    }

It's obviously unnecessary to "return" `result`.
2013-11-06 13:27:47 -08:00
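
To illustrate why the trailing value was dead code: a Scala method declared with result type `Unit` silently discards the value of its last expression. A simplified, hypothetical version:

    def runJobSimplified(resultHandler: (Int, String) => Unit) {
      val results = Seq("a", "b", "c")
      results.zipWithIndex.foreach { case (r, i) => resultHandler(i, r) }
      results  // dead code: a Unit-returning method discards this value, so it can be removed
    }
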
Aaron Davidson 80e98d2bd7 Attempt to fix SparkListenerSuite breakage
Could not reproduce locally, but this test could've been flaky if the
build machine was too fast.
2013-11-06 08:03:35 -08:00
Lian, Cheng a0c4565183 Removed unused return value in SparkContext.runJob 2013-11-06 23:18:59 +08:00
Reynold Xin a02eed6811 Ignore a task status update if the executor doesn't exist anymore. 2013-11-05 18:46:38 -08:00
Lian, Cheng 8b4c994e8c Using compact case class pattern matching syntax to simplify code in DAGScheduler.processEvent 2013-11-05 17:18:42 +08:00
Reynold Xin 81065321c0 Merge pull request #139 from aarondav/shuffle-next
Never store shuffle blocks in BlockManager

After the BlockId refactor (PR #114), it became very clear that ShuffleBlocks are of no use
within BlockManager (they had a no-arg constructor!). This patch completely eliminates
them, saving us around 100-150 bytes per shuffle block.
The total, system-wide overhead per shuffle block is now a flat 8 bytes, excluding
state saved by the MapOutputTracker.

Note: This should *not* be merged directly into 0.8.0 -- see #138
2013-11-04 20:47:14 -08:00
Aaron Davidson 93c90844cb Never store shuffle blocks in BlockManager
After the BlockId refactor (PR #114), it became very clear that ShuffleBlocks are of no use
within BlockManager (they had a no-arg constructor!). This patch completely eliminates
them, saving us around 100-150 bytes per shuffle block.
The total, system-wide overhead per shuffle block is now a flat 8 bytes, excluding
state saved by the MapOutputTracker.
2013-11-04 18:43:42 -08:00
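
The "flat 8 bytes" is one primitive Long per block. As a rough sketch of the idea (assumed, not the actual BlockManager code): if a map task's blocks are written back-to-back into one consolidated file, an index of offsets, one Long each, recovers every block's position and length:

    // Hypothetical bookkeeping: offsets(i) is where block i begins in the
    // consolidated file; block i's length is offsets(i + 1) - offsets(i).
    class FileSegmentIndex(numBlocks: Int) {
      private val offsets = new Array[Long](numBlocks + 1)  // 8 bytes per block
      def recordBlockEnd(blockId: Int, end: Long) { offsets(blockId + 1) = end }
      def segment(blockId: Int): (Long, Long) =
        (offsets(blockId), offsets(blockId + 1) - offsets(blockId))
    }
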
Reynold Xin 0b26a392df Merge pull request #128 from shimingfei/joblogger-doc
add javadoc to JobLogger, and some small fixes

against Spark-941

add javadoc to JobLogger, output more info for RDD, and modify recordStageDepGraph to avoid outputting duplicate stage dependency information

(cherry picked from commit 518cf22eb2)
Signed-off-by: Reynold Xin <rxin@apache.org>
2013-11-04 18:22:06 -08:00
Aaron Davidson 1ba11b1c6a Minor cleanup in ShuffleBlockManager 2013-11-04 17:16:41 -08:00
Aaron Davidson 6201e5e249 Refactor ShuffleBlockManager to reduce public interface
- ShuffleBlocks has been removed and replaced by ShuffleWriterGroup.
- ShuffleWriterGroup no longer contains a reference to a ShuffleFileGroup.
- ShuffleFile has been removed and its contents are now within ShuffleFileGroup.
- ShuffleBlockManager.forShuffle has been replaced by a more stateful forMapTask.
2013-11-04 09:41:04 -08:00
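
A sketch of the narrowed interface this refactor describes; the signatures below are guesses from the summary, with a placeholder Writer type standing in for Spark's block writer class:

    // Hypothetical shape of the public interface after the refactor: callers
    // obtain one writer group per map task rather than one per shuffle.
    trait Writer { def write(value: Any): Unit; def close(): Unit }

    trait ShuffleWriterGroup {
      def writers: Array[Writer]              // one open writer per reduce bucket
      def releaseWriters(success: Boolean)    // return file groups for reuse
    }

    trait ShuffleBlockManagerSketch {
      // One stateful call per map task, replacing the old per-shuffle forShuffle
      def forMapTask(shuffleId: Int, mapId: Int, numBuckets: Int): ShuffleWriterGroup
    }
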
Aaron Davidson b0cf19fe3c Add javadoc and remove unused code 2013-11-03 22:16:58 -08:00
Aaron Davidson 39d93ed4b9 Clean up test files properly
For some reason, even calling
java.nio.file.Files.createTempDirectory(...).toFile.deleteOnExit()
does not delete the directory on exit. Guava's analogous function
seems to work, however.
2013-11-03 21:52:59 -08:00
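
For reference, a hedged sketch of the working pattern: Guava's `Files.createTempDir()` plus an explicit shutdown hook that deletes recursively. Note that `deleteOnExit()` uses `File.delete()`, which cannot remove a non-empty directory; the recursive delete below is a hand-rolled stand-in, not the exact cleanup Spark uses:

    import java.io.File
    import com.google.common.io.Files

    // Create the temp dir with Guava, then delete it recursively on JVM exit.
    val tempDir: File = Files.createTempDir()
    Runtime.getRuntime.addShutdownHook(new Thread() {
      override def run() {
        def deleteRecursively(f: File) {
          if (f.isDirectory) f.listFiles.foreach(deleteRecursively)
          f.delete()
        }
        deleteRecursively(tempDir)
      }
    })
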
Aaron Davidson a0bb569a81 use OpenHashMap, remove monotonicity requirement, fix failure bug 2013-11-03 21:34:56 -08:00
Aaron Davidson 8703898d3f Address Reynold's comments 2013-11-03 21:34:44 -08:00
Aaron Davidson 3ca52309f2 Fix test breakage 2013-11-03 21:34:44 -08:00
Aaron Davidson 1592adfa25 Add documentation and address other comments 2013-11-03 21:34:44 -08:00
Aaron Davidson 7d44dec9bd Fix weird bug with specialized PrimitiveVector 2013-11-03 21:34:43 -08:00
Aaron Davidson 7453f31181 Address minor comments 2013-11-03 21:34:43 -08:00
Aaron Davidson 84991a1b91 Memory-optimized shuffle file consolidation
Overhead of each shuffle block for consolidation has been reduced from >300 bytes
to 8 bytes (1 primitive Long). Verified via profiler testing with 1 million shuffle blocks;
net overhead was ~8,400,000 bytes.

Despite the memory-optimized implementation incurring extra CPU overhead, the runtime
of the shuffle phase in this test was only around 2% slower, while the reduce phase
was 40% faster, when compared to not using any shuffle file consolidation.
2013-11-03 21:34:13 -08:00
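
The arithmetic checks out: one million blocks at 8 bytes each is 8,000,000 bytes, and the observed ~8,400,000 is consistent with growth slack in the backing arrays. A minimal sketch of the kind of growable primitive vector that holds those Longs without boxing (hypothetical; see also the specialized PrimitiveVector fix above):

    // Hypothetical specialized vector: storage is a flat primitive Long array
    // that doubles as it grows, so each element costs exactly 8 bytes.
    class LongVector(initialCapacity: Int = 64) {
      private var array = new Array[Long](initialCapacity)
      private var numElements = 0
      def +=(value: Long) {
        if (numElements == array.length) {
          val newArray = new Array[Long](array.length * 2)
          Array.copy(array, 0, newArray, 0, numElements)
          array = newArray
        }
        array(numElements) = value
        numElements += 1
      }
      def apply(index: Int): Long = array(index)
      def size: Int = numElements
    }
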
Reynold Xin eb5f8a3f97 Code review feedback. 2013-11-03 18:11:44 -08:00
Reynold Xin 1e9543b567 Fixed a bug that used twice the amount of memory for the primitive arrays due to a Scala compiler bug.
Also addressed Matei's code review comment.
2013-11-02 23:19:01 -07:00
Reynold Xin da6bb0aedd Merge branch 'master' into hash1 2013-11-02 22:45:15 -07:00
Evan Chan f3679fd494 Add local: URI support to addFile as well 2013-11-01 11:08:03 -07:00
Matei Zaharia 8f1098a3f0 Merge pull request #117 from stephenh/avoid_concurrent_modification_exception
Handle ConcurrentModificationExceptions in SparkContext init.

System.getProperties.toMap will fail-fast when concurrently modified,
and it seems like some other thread started by SparkContext does
a System.setProperty during its initialization.

Handle this by just looping on ConcurrentModificationException, which
seems the safest, since the non-fail-fast methods (Hashtable.entrySet)
have undefined behavior under concurrent modification.
2013-10-30 20:11:48 -07:00
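
A sketch of the retry pattern described above: loop until the fail-fast copy of System.getProperties succeeds. The helper is hypothetical, not the exact SparkContext code:

    import java.util.ConcurrentModificationException
    import scala.collection.JavaConverters._

    // Keep retrying the fail-fast copy: a concurrent System.setProperty from
    // another thread raises ConcurrentModificationException, so just loop.
    def copySystemProperties(): Map[String, String] = {
      while (true) {
        try {
          return System.getProperties.asScala.toMap
        } catch {
          case _: ConcurrentModificationException => // raced with a setProperty; retry
        }
      }
      sys.error("unreachable")
    }
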
Matei Zaharia dc9ce16f6b Merge pull request #126 from kayousterhout/local_fix
Fixed incorrect log message in local scheduler

This change is especially relevant at the moment, because some users are seeing this failure, and the log message is misleading/incorrect (because for the tests, the max number of failures is set to 0, not 4)
2013-10-30 17:01:56 -07:00
Matei Zaharia 33de11c51d Merge pull request #124 from tgravescs/sparkHadoopUtilFix
Pull SparkHadoopUtil out of SparkEnv (jira SPARK-886)

Having the logic to initialize the correct SparkHadoopUtil in SparkEnv prevents it from being used until after the SparkContext is initialized. This causes issues like https://spark-project.atlassian.net/browse/SPARK-886. It also makes it hard to use in singleton objects. For instance, I want to use it in the security code.
2013-10-30 16:58:27 -07:00
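
A hedged sketch of the singleton pattern implied here: pick the right SparkHadoopUtil once, in a standalone object, so it is usable before any SparkContext exists. The names and the environment flag are assumptions for illustration:

    // Hypothetical: choose the YARN-aware implementation from an environment
    // flag at first use, instead of wiring the choice through SparkEnv.
    object SparkHadoopUtilSketch {
      lazy val get: SparkHadoopUtilBase =
        if (System.getenv("SPARK_YARN_MODE") != null) new YarnHadoopUtil
        else new SparkHadoopUtilBase
    }

    class SparkHadoopUtilBase { /* default Hadoop interactions */ }
    class YarnHadoopUtil extends SparkHadoopUtilBase { /* YARN-specific overrides */ }
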
Kay Ousterhout ff038eb4e0 Fixed incorrect log message in local scheduler 2013-10-30 15:27:23 -07:00
Matei Zaharia 618c1f6cf3 Merge pull request #125 from velvia/2013-10/local-jar-uri
Add support for local:// URI scheme for addJars()

This PR adds support for a new URI scheme for SparkContext.addJars():  `local://file/path`.
The *local* scheme indicates that the `/file/path` exists on every worker node.    The reason for its existence is big library JARs, which would be really expensive to serve using the standard HTTP fileserver distribution method, especially for big clusters.  Today the only inexpensive way to do this (assuming such a file is on every host, via say NFS, rsync, etc.) is to add the JAR to the SPARK_CLASSPATH, but we want a method where the user does not need to modify the Spark configuration.

I would add something to the docs, but it's not obvious where to add it.

Oh, and it would be great if this could be merged in time for 0.8.1.
2013-10-30 12:03:44 -07:00
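
Usage is just a different URI scheme on the existing API. A hypothetical example (master URL and paths are placeholders), following the `local://file/path` form from the PR description:

    import org.apache.spark.SparkContext

    val sc = new SparkContext("spark://master:7077", "MyApp")
    // Default behavior: the driver serves this jar to workers over HTTP.
    sc.addJar("/tmp/small-dep.jar")
    // local:// scheme: /opt/libs/big-library.jar already exists on every worker.
    sc.addJar("local://opt/libs/big-library.jar")
    // The same scheme works for files, per the addFile commit above.
    sc.addFile("local://opt/data/lookup.dat")
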
Stephen Haberman 09f3b677cb Avoid match errors when filtering for spark.hadoop settings. 2013-10-30 12:29:39 -05:00
tgravescs f231aaa24c move the hadoopJobMetadata back into SparkEnv 2013-10-30 11:46:12 -05:00
Evan Chan de0285556a Add support for local:// URI scheme for addJars()
This indicates that a jar is available locally on each worker node.
2013-10-30 09:41:35 -07:00
tgravescs 54d9c6f253 Merge remote-tracking branch 'upstream/master' into sparkHadoopUtilFix 2013-10-30 10:41:21 -05:00
Josh Rosen cb9c8a922f Extract BlockInfo classes from BlockManager.
This saves space, since the inner classes each had
to keep a reference to the enclosing BlockManager.
2013-10-29 18:06:51 -07:00
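
The saving comes from how the JVM represents inner classes: every instance of a non-static inner class carries a hidden pointer to its outer instance. A hedged illustration (class names and fields hypothetical):

    // Inner class: each instance stores a hidden $outer reference to the
    // enclosing manager -- one extra pointer per tracked block.
    class BlockManagerSketch {
      class BlockInfoInner(val level: Int, val size: Long)
    }

    // Top-level class: no enclosing reference, so each instance is smaller.
    // With millions of blocks tracked, the saved pointers add up.
    class BlockInfoTopLevel(val level: Int, val size: Long)
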
Stephen Haberman 3a388c320c Use Properties.clone() instead. 2013-10-29 19:20:40 -05:00
Josh Rosen 846b1cf5ab Store fewer BlockInfo fields for shuffle blocks. 2013-10-29 15:14:29 -07:00
tgravescs eeb5f64c67 Remove SparkHadoopUtil stuff from SparkEnv 2013-10-29 17:12:16 -05:00
Josh Rosen 2d7cf6a271 Restructure BlockInfo fields to reduce memory use. 2013-10-27 23:01:03 -07:00
Matei Zaharia aec9bf9060 Merge pull request #112 from kayousterhout/ui_task_attempt_id
Display both task ID and task attempt ID in UI, and rename taskId to taskAttemptId

Previously only the task attempt ID was shown in the UI; this was confusing because the job can be shown as complete while there are tasks still running.  Showing the task ID in addition to the attempt ID makes it clear which tasks are redundant.

This commit also renames taskId to taskAttemptId in TaskInfo and in the local/cluster schedulers.  This identifier was used to uniquely identify attempts, not tasks, so the old naming was confusing.  The new naming is also more consistent with MapReduce.
2013-10-27 19:32:00 -07:00
Stephen Haberman a6ae2b4832 Handle ConcurrentModificationExceptions in SparkContext init.
System.getProperties.toMap will fail-fast when concurrently modified,
and it seems like some other thread started by SparkContext does
a System.setProperty during its initialization.

Handle this by just looping on ConcurrentModificationException, which
seems the safest, since the non-fail-fast methods (Hashtable.entrySet)
have undefined behavior under concurrent modification.
2013-10-27 14:08:32 -05:00
Aaron Davidson 4261e834cb Use flag instead of name check. 2013-10-26 23:53:38 -07:00
Aaron Davidson 596f18479e Eliminate extra memory usage when shuffle file consolidation is disabled
Otherwise, we see SPARK-946 even when shuffle file consolidation is disabled.
Fixing SPARK-946 is still forthcoming.
2013-10-26 22:35:01 -07:00
Kay Ousterhout ae22b4dd99 Display both task ID and task index in UI 2013-10-26 22:18:39 -07:00