[SPARK-24228][SQL] Fix Java lint errors

## What changes were proposed in this pull request?
This PR fixes the following Java lint errors, caused by unused imports and lines exceeding the 100-character limit:

```
$ dev/lint-java
Using `mvn` from path: /usr/bin/mvn
Checkstyle checks failed at following occurrences:
[ERROR] src/main/java/org/apache/spark/sql/sources/v2/reader/partitioning/Distribution.java:[25] (sizes) LineLength: Line is longer than 100 characters (found 109).
[ERROR] src/main/java/org/apache/spark/sql/sources/v2/reader/streaming/ContinuousReader.java:[38] (sizes) LineLength: Line is longer than 100 characters (found 102).
[ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java:[21,8] (imports) UnusedImports: Unused import - java.io.ByteArrayInputStream.
[ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java:[29,8] (imports) UnusedImports: Unused import - org.apache.spark.unsafe.Platform.
[ERROR] src/test/java/test/org/apache/spark/sql/sources/v2/JavaAdvancedDataSourceV2.java:[110] (sizes) LineLength: Line is longer than 100 characters (found 101).
```

With this PR applied:
```
$ dev/lint-java
Using `mvn` from path: /usr/bin/mvn
Checkstyle checks passed.
```
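For reference, the two failing checks correspond to Checkstyle modules configured for the build (in Spark, under `dev/checkstyle.xml`). A minimal sketch of the relevant modules is below; the exact attributes and surrounding configuration in Spark's actual file may differ:

```xml
<module name="Checker">
  <module name="TreeWalker">
    <!-- Flags lines longer than 100 characters -->
    <module name="LineLength">
      <property name="max" value="100"/>
    </module>
    <!-- Flags imports that are never referenced -->
    <module name="UnusedImports"/>
  </module>
</module>
```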

## How was this patch tested?

Existing UTs. Also manually ran Checkstyle against the affected files.

Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>

Closes #21301 from kiszk/SPARK-24228.
Authored by Kazuaki Ishizaki on 2018-05-14 10:57:10 +08:00, committed by hyukjinkwon
parent 7a2d4895c7
commit b6c50d7820
5 changed files with 6 additions and 6 deletions

src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java

```diff
@@ -18,7 +18,6 @@
 package org.apache.spark.sql.execution.datasources.parquet;

-import java.io.ByteArrayInputStream;
 import java.io.File;
 import java.io.IOException;
 import java.lang.reflect.InvocationTargetException;
```

src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java

```diff
@@ -26,7 +26,6 @@ import org.apache.spark.sql.execution.vectorized.WritableColumnVector;
 import org.apache.parquet.column.values.ValuesReader;
 import org.apache.parquet.io.api.Binary;

-import org.apache.spark.unsafe.Platform;

 /**
  * An implementation of the Parquet PLAIN decoder that supports the vectorized interface.
```

src/main/java/org/apache/spark/sql/sources/v2/reader/partitioning/Distribution.java

```diff
@@ -22,7 +22,8 @@ import org.apache.spark.sql.sources.v2.reader.InputPartitionReader;
 /**
  * An interface to represent data distribution requirement, which specifies how the records should
- * be distributed among the data partitions(one {@link InputPartitionReader} outputs data for one partition).
+ * be distributed among the data partitions (one {@link InputPartitionReader} outputs data for one
+ * partition).
  * Note that this interface has nothing to do with the data ordering inside one
  * partition(the output records of a single {@link InputPartitionReader}).
  *
```

src/main/java/org/apache/spark/sql/sources/v2/reader/streaming/ContinuousReader.java

```diff
@@ -35,8 +35,8 @@ import java.util.Optional;
 @InterfaceStability.Evolving
 public interface ContinuousReader extends BaseStreamingSource, DataSourceReader {
   /**
-   * Merge partitioned offsets coming from {@link ContinuousInputPartitionReader} instances for each
-   * partition to a single global offset.
+   * Merge partitioned offsets coming from {@link ContinuousInputPartitionReader} instances
+   * for each partition to a single global offset.
    */
  Offset mergeOffsets(PartitionOffset[] offsets);
```

src/test/java/test/org/apache/spark/sql/sources/v2/JavaAdvancedDataSourceV2.java

```diff
@@ -107,7 +107,8 @@ public class JavaAdvancedDataSourceV2 implements DataSourceV2, ReadSupport {
     }
   }

-  static class JavaAdvancedInputPartition implements InputPartition<Row>, InputPartitionReader<Row> {
+  static class JavaAdvancedInputPartition implements InputPartition<Row>,
+      InputPartitionReader<Row> {
    private int start;
    private int end;
    private StructType requiredSchema;
```