[SPARK-33804][CORE] Fix compilation warnings about 'view bounds are deprecated'

### What changes were proposed in this pull request?

There are only 3 compilation warnings related to `view bounds are deprecated` in the Spark code:
```
[WARNING] /spark-source/core/src/main/scala/org/apache/spark/rdd/SequenceFileRDDFunctions.scala:35: view bounds are deprecated; use an implicit parameter instead.
[WARNING] /spark-source/core/src/main/scala/org/apache/spark/rdd/SequenceFileRDDFunctions.scala:35: view bounds are deprecated; use an implicit parameter instead.
[WARNING] /spark-source/core/src/main/scala/org/apache/spark/rdd/SequenceFileRDDFunctions.scala:55: view bounds are deprecated; use an implicit parameter instead.
```

This PR fixes these compilation warnings by replacing the deprecated view bounds (`<%`) with context bounds backed by a conversion type alias.
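The rewrite pattern can be sketched as follows. This is a minimal, self-contained example, not Spark's actual code: the `Writable` trait, `save` method, and `intToWritable` conversion here are stand-ins, while the `IsWritable` alias mirrors the one this PR introduces.

```scala
import scala.language.implicitConversions

object ViewBoundDemo {
  trait Writable { def write(): String }

  // The deprecated form this PR removes (would emit the warning):
  //   def save[T <% Writable](t: T): String = t.write()

  // The replacement: a context bound backed by a conversion type alias,
  // mirroring the PR's `type IsWritable[A] = A => Writable`.
  type IsWritable[A] = A => Writable
  def save[T: IsWritable](t: T): String = t.write()

  // An implicit conversion satisfies IsWritable[Int] via eta-expansion,
  // exactly as it satisfied the old view bound.
  implicit def intToWritable(i: Int): Writable =
    new Writable { def write(): String = "int:" + i }
}
```

Because a view bound `T <% Writable` always desugared to an implicit `T => Writable` parameter, the context-bound form accepts the same implicit conversions and is source-compatible for callers.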

### Why are the changes needed?
Fix compilation warnings about `view bounds are deprecated`.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass the Jenkins or GitHub Actions builds.

Closes #30924 from LuciferYang/SPARK-33804.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
Commit: 85de644733 (parent: f38265ddda)
Date: 2020-12-30 13:57:44 -06:00
2 changed files with 7 additions and 6 deletions

core/src/main/scala/org/apache/spark/rdd/SequenceFileRDDFunctions.scala:

```diff
@@ -32,16 +32,13 @@ import org.apache.spark.internal.Logging
  * @note This can't be part of PairRDDFunctions because we need more implicit parameters to
  * convert our keys and values to Writable.
  */
-class SequenceFileRDDFunctions[K <% Writable: ClassTag, V <% Writable : ClassTag](
+class SequenceFileRDDFunctions[K: IsWritable: ClassTag, V: IsWritable: ClassTag](
     self: RDD[(K, V)],
     _keyWritableClass: Class[_ <: Writable],
     _valueWritableClass: Class[_ <: Writable])
   extends Logging
   with Serializable {
-  // TODO the context bound (<%) above should be replaced with simple type bound and implicit
-  // conversion but is a breaking change. This should be fixed in Spark 3.x.
   /**
    * Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key
    * and value types. If the key or value are Writable, then we use their classes directly;
@@ -52,7 +49,7 @@ class SequenceFileRDDFunctions[K <% Writable: ClassTag, V <% Writable : ClassTag
   def saveAsSequenceFile(
       path: String,
       codec: Option[Class[_ <: CompressionCodec]] = None): Unit = self.withScope {
-    def anyToWritable[U <% Writable](u: U): Writable = u
+    def anyToWritable[U: IsWritable](u: U): Writable = u
     // TODO We cannot force the return type of `anyToWritable` be same as keyWritableClass and
     // valueWritableClass at the compile time. To implement that, we need to add type parameters to
```

core/src/main/scala/org/apache/spark/rdd/package.scala:

```diff
@@ -17,7 +17,11 @@
 package org.apache.spark
 
+import org.apache.hadoop.io.Writable
+
 /**
  * Provides several RDD implementations. See [[org.apache.spark.rdd.RDD]].
  */
-package object rdd
+package object rdd {
+  type IsWritable[A] = A => Writable
+}
```
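A context bound like `K: IsWritable: ClassTag` is just sugar for extra implicit parameters, which is why the alias in the package object is enough to replace the view bounds. A minimal sketch of the equivalence, with stand-in names (the `Writable` trait and class names here are hypothetical, not Spark's):

```scala
import scala.language.implicitConversions
import scala.reflect.ClassTag

object ContextBoundDesugar {
  trait Writable
  type IsWritable[A] = A => Writable

  // Sugared form, mirroring the shape of the PR's new class signature:
  class Sugared[K: IsWritable: ClassTag](val k: K)

  // What the compiler effectively generates (parameter names hypothetical):
  class Desugared[K](val k: K)(implicit ev: IsWritable[K], ct: ClassTag[K])

  // An implicit conversion satisfies IsWritable[Int] by eta-expansion.
  implicit def intIsWritable(i: Int): Writable = new Writable {}
}
```

Both forms accept the same implicit evidence, so callers that previously satisfied `K <% Writable` via an implicit conversion compile unchanged.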