ab197308a7
## What changes were proposed in this pull request?

With the code changes in https://github.com/apache/spark/pull/21847, Spark can write out to an Avro file as per a user-provided output schema. To make this more robust and user friendly, we should validate the Avro schema before tasks are launched. We should also support writing out the logical decimal type as BYTES (by default it is written out as FIXED).

## How was this patch tested?

Unit test

Closes #22094 from gengliangwang/AvroSerializerMatch.

Authored-by: Gengliang Wang <gengliang.wang@databricks.com>
Signed-off-by: DB Tsai <d_tsai@apple.com>
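For context on the BYTES-versus-FIXED distinction mentioned above: in the Avro specification, the `decimal` logical type can annotate either the `bytes` type or the `fixed` type. The following schema fragments are an illustrative sketch of the two encodings for a hypothetical `decimal(10, 2)` column (they follow the Avro spec and are not copied from this PR).

A BYTES-backed decimal, where the unscaled value is stored as a variable-length two's-complement byte array:

```json
{
  "type": "bytes",
  "logicalType": "decimal",
  "precision": 10,
  "scale": 2
}
```

A FIXED-backed decimal (the default output described above), where the size must be large enough to hold any unscaled value of the given precision (5 bytes suffice for precision 10, since 2^39 > 10^10):

```json
{
  "type": "fixed",
  "name": "decimal_10_2",
  "size": 5,
  "logicalType": "decimal",
  "precision": 10,
  "scale": 2
}
```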
- avro
- docker
- docker-integration-tests
- flume
- flume-assembly
- flume-sink
- kafka-0-8
- kafka-0-8-assembly
- kafka-0-10
- kafka-0-10-assembly
- kafka-0-10-sql
- kinesis-asl
- kinesis-asl-assembly
- spark-ganglia-lgpl