[SPARK-31792][SS][DOC][FOLLOW-UP] Rephrase the description for some operations

### What changes were proposed in this pull request?
Rephrase the descriptions of some operations to make them clearer.

### Why are the changes needed?
Add more detail to the document.

### Does this PR introduce _any_ user-facing change?
No, document only.

### How was this patch tested?
Document only.

Closes #29269 from xuanyuanking/SPARK-31792-follow.

Authored-by: Yuanjian Li <yuanjian.li@databricks.com>
Signed-off-by: Jungtaek Lim (HeartSaVioR) <kabhwan.opensource@gmail.com>
Yuanjian Li 2020-08-22 21:32:23 +09:00 committed by Jungtaek Lim (HeartSaVioR)
parent 12f4331b9e
commit 8b26c69ce7
2 changed files with 6 additions and 7 deletions


@@ -426,11 +426,11 @@ queries. Currently, it contains the following metrics.
* **Batch Duration.** The process duration of each batch.
* **Operation Duration.** The amount of time taken to perform various operations in milliseconds.
The tracked operations are listed as follows.
- * addBatch: Adds result data of the current batch to the sink.
- * getBatch: Gets a new batch of data to process.
- * latestOffset: Gets the latest offsets for sources.
- * queryPlanning: Generates the execution plan.
- * walCommit: Writes the offsets to the metadata log.
+ * addBatch: Time taken to read the micro-batch's input data from the sources, process it, and write the batch's output to the sink. This should take the bulk of the micro-batch's time.
+ * getBatch: Time taken to prepare the logical query to read the input of the current micro-batch from the sources.
+ * latestOffset & getOffset: Time taken to query the maximum available offset for this source.
+ * queryPlanning: Time taken to generate the execution plan.
+ * walCommit: Time taken to write the offsets to the metadata log.
As an early-release version, the statistics page is still under development and will be improved in
future releases.
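
The operation names documented above are also the keys of each micro-batch's `StreamingQueryProgress.durationMs` map, so the same numbers can be checked outside the web UI. A minimal sketch, not part of this commit, assuming an active `SparkSession` named `spark` (e.g. in `spark-shell`) and the built-in `rate` test source:

```scala
import scala.collection.JavaConverters._

import org.apache.spark.sql.streaming.StreamingQuery

// Start a trivial query against the built-in rate source so that progress
// objects get produced; any running streaming query would work here.
val query: StreamingQuery = spark.readStream
  .format("rate")
  .load()
  .writeStream
  .format("console")
  .start()

Thread.sleep(5000) // give at least one micro-batch time to complete

// durationMs is keyed by the operation names listed above
// (addBatch, getBatch, latestOffset, queryPlanning, walCommit, ...).
val progress = query.lastProgress
if (progress != null) {
  progress.durationMs.asScala.foreach { case (op, ms) =>
    println(s"$op took $ms ms")
  }
}
```

These are the same values that back the "Operation Duration" chart, so the sketch is mainly useful for verifying the numbers in tests or scripts.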


@@ -566,8 +566,7 @@ class MicroBatchExecution(
val nextBatch =
new Dataset(lastExecution, RowEncoder(lastExecution.analyzed.schema))
-val batchSinkProgress: Option[StreamWriterCommitProgress] =
-  reportTimeTaken("addBatch") {
+val batchSinkProgress: Option[StreamWriterCommitProgress] = reportTimeTaken("addBatch") {
SQLExecution.withNewExecutionId(lastExecution) {
sink match {
case s: Sink => s.addBatch(currentBatchId, nextBatch)
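
The `reportTimeTaken("addBatch")` block in this hunk is what produces the `addBatch` entry described in the doc change. A sketch, not part of this commit, of one way to surface that value per batch through the public `StreamingQueryListener` API, again assuming an active `SparkSession` named `spark`:

```scala
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

// Log the addBatch duration for every completed micro-batch.
spark.streams.addListener(new StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit = ()
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()
  override def onQueryProgress(event: QueryProgressEvent): Unit = {
    // "addBatch" is the key written by reportTimeTaken("addBatch") above.
    val ms = Option(event.progress.durationMs.get("addBatch"))
      .map(_.longValue)
      .getOrElse(-1L)
    println(s"batch ${event.progress.batchId}: addBatch took $ms ms")
  }
})
```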