[SPARK-28199][SS][FOLLOWUP] Remove unnecessary annotations for private API

## What changes were proposed in this pull request?

SPARK-28199 (#24996) made the Trigger implementations `private[sql]` and encouraged end users to use the `Trigger.xxx` factory methods instead.
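
For illustration, a minimal sketch of the intended public usage, where the trigger is configured through the public factory methods rather than by instantiating the private case classes (the rate source, console sink, and interval below are arbitrary examples, not part of this change):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("trigger-usage-sketch").getOrCreate()

// End users configure the trigger via Trigger.ProcessingTime / Trigger.Once /
// Trigger.Continuous; the private[sql] case classes stay an implementation detail.
val query = spark.readStream
  .format("rate")              // built-in test source emitting rows at a fixed rate
  .load()
  .writeStream
  .format("console")
  .trigger(Trigger.ProcessingTime("10 seconds"))
  .start()

query.awaitTermination()
```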

As pointed out in a post-review comment on 7548a8826d (r34366934), we can remove these annotations, since they are only meant for public APIs.

## How was this patch tested?

N/A

Closes #25200 from HeartSaVioR/SPARK-28199-FOLLOWUP.

Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan@gmail.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>

@@ -21,7 +21,6 @@ import java.util.concurrent.TimeUnit
 
 import scala.concurrent.duration.Duration
 
-import org.apache.spark.annotation.{Evolving, Experimental}
 import org.apache.spark.sql.streaming.Trigger
 import org.apache.spark.unsafe.types.CalendarInterval
 
@@ -47,15 +46,12 @@ private object Triggers {
  * A [[Trigger]] that processes only one batch of data in a streaming query then terminates
  * the query.
  */
-@Experimental
-@Evolving
 private[sql] case object OneTimeTrigger extends Trigger
 
 /**
  * A [[Trigger]] that runs a query periodically based on the processing time. If `interval` is 0,
  * the query will run as fast as possible.
  */
-@Evolving
 private[sql] case class ProcessingTimeTrigger(intervalMs: Long) extends Trigger {
   Triggers.validate(intervalMs)
 }
@@ -84,7 +80,6 @@ private[sql] object ProcessingTimeTrigger {
  * A [[Trigger]] that continuously processes streaming data, asynchronously checkpointing at
  * the specified interval.
  */
-@Evolving
 private[sql] case class ContinuousTrigger(intervalMs: Long) extends Trigger {
   Triggers.validate(intervalMs)
 }
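
As background for the removal: the `@Experimental` and `@Evolving` annotations signal API-stability guarantees to end users, so they only carry meaning on public types. A minimal, purely hypothetical sketch of the pattern they are intended for (the class below is illustrative, not a real Spark API):

```scala
import org.apache.spark.annotation.Evolving

// Hypothetical example: stability annotations belong on API that end users can
// reach. On a private[sql] class they document nothing, which is why the diff
// above drops them.
@Evolving
class ExampleUserFacingOption(val value: String)
```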