[SPARK-10546] Check partitionId's range in ExternalSorter#spill()
See this thread for background: http://search-hadoop.com/m/q3RTt0rWvIkHAE81

We should check the range of the partition Id and provide a meaningful message through an exception. Alternatively, we could use abs() and modulo to force the partition Id into the legitimate range, but the expectation is that the user corrects the logic error in his / her code.

Author: tedyu <yuzhihong@gmail.com>

Closes #8703 from tedyu/master.
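The check added by this patch can be illustrated with a small standalone Scala sketch. The names below (`checkPartitionId`, the `numPartitions` parameter) are stand-ins for illustration, not the actual `ExternalSorter` internals; only the `require(...)` call mirrors the patched code.

```scala
// Minimal sketch, not the Spark source: validate a partition id produced
// by a user-supplied partitioner before it is used as an array index.
object PartitionCheck {
  // Mirrors the require(...) added to ExternalSorter#spill() by this commit.
  def checkPartitionId(partitionId: Int, numPartitions: Int): Int = {
    require(partitionId >= 0 && partitionId < numPartitions,
      s"partition Id: ${partitionId} should be in the range [0, ${numPartitions})")
    partitionId
  }

  def main(args: Array[String]): Unit = {
    // A valid id passes through unchanged.
    println(checkPartitionId(3, 8))
    // A buggy partitioner (e.g. one based on a raw hashCode) can return a
    // negative id; the require fails fast with a descriptive message
    // instead of causing an ArrayIndexOutOfBoundsException deeper in spill().
    try {
      checkPartitionId(-1, 8)
    } catch {
      case e: IllegalArgumentException => println(s"rejected: ${e.getMessage}")
    }
  }
}
```

`require` throws `IllegalArgumentException`, which surfaces the user's logic error at the point of the bad value rather than silently remapping it, which is why the patch prefers this over the abs()-and-modulo alternative.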
This commit is contained in:
parent 5f46444765
commit b231ab8938
@@ -297,6 +297,8 @@ private[spark] class ExternalSorter[K, V, C](
     val it = collection.destructiveSortedWritablePartitionedIterator(comparator)
     while (it.hasNext) {
       val partitionId = it.nextPartition()
+      require(partitionId >= 0 && partitionId < numPartitions,
+        s"partition Id: ${partitionId} should be in the range [0, ${numPartitions})")
       it.writeNext(writer)
       elementsPerPartition(partitionId) += 1
       objectsWritten += 1