spark-instrumented-optimizer/python/pyspark/streaming
Takeshi YAMAMURO 256a3a8013 [SPARK-18020][STREAMING][KINESIS] Checkpoint SHARD_END to finish reading closed shards
## What changes were proposed in this pull request?
This PR fixes an issue that occurs when resharding Kinesis streams: resharding causes the KCL to throw an exception because Spark does not checkpoint `SHARD_END` when it finishes reading closed shards in `KinesisRecordProcessor#shutdown`. As a result, the application eventually stops subscribing to newly split (or merged) shards.
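
For illustration, the sketch below shows the general KCL 1.x contract that this fix relies on; it is a hedged example with a hypothetical class name, not Spark's actual `KinesisRecordProcessor` code. When a record processor is shut down with reason `TERMINATE` (the shard was closed by a split or merge), calling `checkpoint()` with no arguments records `SHARD_END` for that shard, which lets the KCL move on to the child shards.

```scala
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownReason

// Hypothetical processor; only the shutdown handling relevant to SPARK-18020 is sketched.
class ExampleRecordProcessor {
  def shutdown(checkpointer: IRecordProcessorCheckpointer, reason: ShutdownReason): Unit = {
    reason match {
      case ShutdownReason.TERMINATE =>
        // The shard was closed by a split or merge: checkpoint with no sequence
        // number so the KCL records SHARD_END and can start reading child shards.
        // (Real code should also handle ShutdownException/InvalidStateException.)
        checkpointer.checkpoint()
      case _ =>
        // ZOMBIE (lease lost to another worker): do not checkpoint.
    }
  }
}
```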

## How was this patch tested?
Added a test in `KinesisStreamSuite` to verify that reading continues correctly when shards are split or merged.

Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>

Closes #16213 from maropu/SPARK-18020.
2017-01-25 17:38:48 -08:00
| File | Latest commit | Date |
|------|---------------|------|
| __init__.py | [SPARK-6328][PYTHON] Python API for StreamingListener | 2015-11-16 11:29:27 -08:00 |
| context.py | [SPARK-18447][DOCS] Fix the markdown for Note:/NOTE:/Note that across Python API documentation | 2016-11-22 11:40:18 +00:00 |
| dstream.py | [MINOR] Fix Typos 'an -> a' | 2016-06-06 09:35:47 +01:00 |
| flume.py | [SPARK-14073][STREAMING][TEST-MAVEN] Move flume back to Spark | 2016-03-25 17:37:16 -07:00 |
| kafka.py | [SPARK-18445][BUILD][DOCS] Fix the markdown for Note:/NOTE:/Note that/'''Note:''' across Scala/Java API documentation | 2016-11-19 11:24:15 +00:00 |
| kinesis.py | [SPARK-18447][DOCS] Fix the markdown for Note:/NOTE:/Note that across Python API documentation | 2016-11-22 11:40:18 +00:00 |
| listener.py | [SPARK-6328][PYTHON] Python API for StreamingListener | 2015-11-16 11:29:27 -08:00 |
| tests.py | [SPARK-18020][STREAMING][KINESIS] Checkpoint SHARD_END to finish reading closed shards | 2017-01-25 17:38:48 -08:00 |
| util.py | [SPARK-12652][PYSPARK] Upgrade Py4J to 0.9.1 | 2016-01-12 14:27:05 -08:00 |