spark-instrumented-optimizer/sql/catalyst
Kousuke Saruta 603a7fd7b6 [SPARK-34308][SQL] Escape meta-characters in printSchema
### What changes were proposed in this pull request?

Similar to SPARK-33690, this PR improves the output layout of `printSchema` for the case where column names contain meta-characters.
Here is an example.

Before:
```
scala> val df1 = spark.sql("SELECT 'aaa\nbbb\tccc\rddd\feee\bfff\u000Bggg\u0007hhh'")
scala> df1.printSchema
root
 |-- aaa
ddd	ccc
   eefff
        ggghhh: string (nullable = false)
```

After:
```
scala> df1.printSchema
root
 |-- aaa\nbbb\tccc\rddd\feee\bfff\vggg\ahhh: string (nullable = false)
```
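For illustration, below is a minimal Scala sketch of the kind of escaping shown above. The object and method names are hypothetical and this is not the actual Spark implementation; it only demonstrates replacing each control character in a column name with its printable escape sequence before the schema tree is rendered.

```scala
// Hypothetical helper illustrating the escaping behaviour described above.
// This is a sketch only; the real change lives in Spark's schema formatting code.
object MetaCharEscaper {
  /** Replace control characters with printable escape sequences. */
  def escapeMetaCharacters(str: String): String = {
    str.replace("\n", "\\n")      // line feed
      .replace("\t", "\\t")       // tab
      .replace("\r", "\\r")       // carriage return
      .replace("\f", "\\f")       // form feed
      .replace("\b", "\\b")       // backspace
      .replace("\u000B", "\\v")   // vertical tab
      .replace("\u0007", "\\a")   // bell
  }
}

// Example: MetaCharEscaper.escapeMetaCharacters("aaa\nbbb\tccc")
// returns "aaa\\nbbb\\tccc", which prints as aaa\nbbb\tccc on a single line.
```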

### Why are the changes needed?

To avoid breaking the layout of `Dataset#printSchema` output when column names contain meta-characters.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Added a new test.
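As a rough illustration, a test for this behaviour might capture the `printSchema` output and assert that the escaped column name stays on a single line. This is a hedged sketch in ScalaTest style, assuming a `SparkSession` named `spark`; the actual test added by this PR may differ.

```scala
test("SPARK-34308: printSchema escapes meta-characters in column names") {
  // The literal's text becomes the column name, so it contains real control characters.
  val df = spark.sql("SELECT 'aaa\nbbb\tccc'")

  // Capture what printSchema writes to stdout.
  val out = new java.io.ByteArrayOutputStream()
  Console.withOut(out) {
    df.printSchema()
  }

  // The meta-characters should be rendered as escape sequences on one line.
  assert(out.toString.contains("aaa\\nbbb\\tccc"))
}
```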

Closes #31412 from sarutak/escape-meta-printSchema.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2021-02-03 11:06:41 +09:00
| Name | Latest commit | Date |
| --- | --- | --- |
| benchmarks | [SPARK-30413][SQL] Avoid WrappedArray roundtrip in GenericArrayData constructor, plus related optimization in ParquetMapConverter | 2020-01-19 19:12:19 -08:00 |
| src | [SPARK-34308][SQL] Escape meta-characters in printSchema | 2021-02-03 11:06:41 +09:00 |
| pom.xml | [SPARK-33212][BUILD] Upgrade to Hadoop 3.2.2 and move to shaded clients for Hadoop 3.x profile | 2021-01-15 14:06:50 -08:00 |