[SPARK-14874][SQL][STREAMING] Remove the obsolete Batch representation

## What changes were proposed in this pull request?

The `Batch` class, which was used to indicate progress in a stream, was made obsolete by [[SPARK-13985][SQL] Deterministic batches with ids](caea152145) and has been unused since.

This patch:
- removes the `Batch` class (a sketch of the offset-based contract that replaces it follows this list)
- ~~does some related renaming~~ (update: this has been reverted)
- fixes the related scaladoc comments
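
For context, SPARK-13985 moved progress tracking into `Offset`s themselves, so a wrapper pairing data with an end offset no longer carries any extra information. Below is a minimal sketch of the offset-based `Source` contract; only the `getBatch` signature is taken from the diffs in this commit, while the `getOffset` member and comments are illustrative:

```scala
import org.apache.spark.sql.DataFrame

// Illustrative stand-in for org.apache.spark.sql.execution.streaming.Offset.
trait Offset

// Sketch of the offset-based contract that makes `Batch` redundant: progress
// is tracked by offsets alone, and getBatch returns plain data for an offset
// range instead of a (data, offset) wrapper.
trait Source {
  // Highest offset for which data is currently available, if any.
  def getOffset: Option[Offset]

  // Data between the offsets (`start`, `end`]; None means "from the start".
  def getBatch(start: Option[Offset], end: Offset): DataFrame
}

// The removed class only bundled what the two methods above already convey:
//   class Batch(val end: Offset, val data: DataFrame)
```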

## How was this patch tested?

N/A

Author: Liwei Lin <lwlin7@gmail.com>

Closes #12638 from lw-lin/remove-batch.
Authored by Liwei Lin on 2016-04-27 10:25:33 -07:00; committed by Michael Armbrust.
Parent: 7dd01d9c01
Commit: a234cc6146
5 changed files with 4 additions and 30 deletions


```diff
@@ -1,26 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.spark.sql.execution.streaming
-
-import org.apache.spark.sql.DataFrame
-
-/**
- * Used to pass a batch of data through a streaming query execution along with an indication
- * of progress in the stream.
- */
-class Batch(val end: Offset, val data: DataFrame)
```


```diff
@@ -88,7 +88,7 @@ class FileStreamSource(
   }
 
   /**
-   * Returns the next batch of data that is available after `start`, if any is available.
+   * Returns the data that is between the offsets (`start`, `end`].
    */
   override def getBatch(start: Option[Offset], end: Offset): DataFrame = {
     val startId = start.map(_.asInstanceOf[LongOffset].offset).getOrElse(-1L)
```
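
The comment fix captures the contract change: `getBatch` no longer returns "the next batch after `start`" but the data in a half-open interval, exclusive of `start` and inclusive of `end`. A minimal sketch of that interval logic, assuming `LongOffset` wraps a monotonically increasing batch id (the helper name is made up for illustration):

```scala
// Hypothetical helper illustrating the (`start`, `end`] contract with
// LongOffset-style ids; a missing start offset means "from the beginning".
def batchIdsInRange(start: Option[Long], end: Long): Seq[Long] = {
  val startId = start.getOrElse(-1L) // mirrors getOrElse(-1L) in the diff above
  ((startId + 1) to end).toSeq       // exclusive of start, inclusive of end
}

// batchIdsInRange(None, 2)    => Seq(0, 1, 2)  -- everything up to id 2
// batchIdsInRange(Some(0), 2) => Seq(1, 2)     -- id 0 was already processed
```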


```diff
@@ -91,7 +91,7 @@ case class MemoryStream[A : Encoder](id: Int, sqlContext: SQLContext)
   }
 
   /**
-   * Returns the next batch of data that is available after `start`, if any is available.
+   * Returns the data that is between the offsets (`start`, `end`].
    */
   override def getBatch(start: Option[Offset], end: Offset): DataFrame = {
     val startOrdinal =
```
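
For an in-memory source, the same contract maps naturally onto slicing a buffer of stored batches. A rough, self-contained sketch under the assumption that offset `n` identifies the batch at index `n`; the `LongOffset` case class and `getBatchData` helper here are illustrative, not the actual `MemoryStream` implementation:

```scala
// Illustrative offset type: a Long-valued position in the stream.
case class LongOffset(offset: Long)

// Returns the rows of all batches in (`start`, `end`], assuming offset n
// identifies batches(n). With start = None, everything up to `end` is returned.
def getBatchData[A](batches: IndexedSeq[Seq[A]],
                    start: Option[LongOffset],
                    end: LongOffset): Seq[A] = {
  val startOrdinal = start.map(_.offset.toInt + 1).getOrElse(0) // exclusive bound
  val endOrdinal   = end.offset.toInt + 1                       // inclusive bound
  batches.slice(startOrdinal, endOrdinal).flatten
}
```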