[SPARK-28972][DOCS] Updating unit description in configurations, to maintain consistency

### What changes were proposed in this pull request?
Updates the unit descriptions in configurations, in order to maintain consistency across configurations.

### Why are the changes needed?
The current descriptions do not mention the size suffix (for example `k`, `m`, or `g`) that can be appended when configuring these values. Stating the default unit explicitly improves user understanding.
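For illustration (not part of this patch), these size configurations accept either a plain number, interpreted in the default unit the description states, or a value with a size suffix; a minimal sketch using the public `SparkConf` API:

```scala
import org.apache.spark.SparkConf

// Hypothetical settings for illustration only: a suffixed value and a
// plain byte count are both accepted for size configurations.
val conf = new SparkConf()
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", "2g")              // with a unit suffix
  .set("spark.io.compression.lz4.blockSize", "32768")  // plain number, read as bytes (32k)
```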

### Does this PR introduce any user-facing change?
Yes. The documentation descriptions are updated.

### How was this patch tested?
Generated the documentation and verified the rendered output.
![Screenshot from 2019-09-05 11-09-17](https://user-images.githubusercontent.com/51401130/64314853-07a55880-cfce-11e9-8af0-6416a50b0188.png)

Closes #25689 from PavithraRamachandran/heapsize_config.

Authored-by: Pavithra Ramachandran <pavi.rams@gmail.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
Pavithra Ramachandran authored on 2019-09-18 09:11:15 -05:00; committed by Sean Owen
parent b48ef7a9fb
commit 600a2a4ae5
2 changed files with 12 additions and 9 deletions


```diff
@@ -243,7 +243,8 @@ package object config {
     .createWithDefault(false)
 
   private[spark] val MEMORY_OFFHEAP_SIZE = ConfigBuilder("spark.memory.offHeap.size")
-    .doc("The absolute amount of memory in bytes which can be used for off-heap allocation. " +
+    .doc("The absolute amount of memory which can be used for off-heap allocation, " +
+      " in bytes unless otherwise specified. " +
       "This setting has no impact on heap memory usage, so if your executors' total memory " +
       "consumption must fit within some hard limit then be sure to shrink your JVM heap size " +
       "accordingly. This must be set to a positive value when spark.memory.offHeap.enabled=true.")
```


```diff
@@ -866,7 +866,7 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.shuffle.service.index.cache.size</code></td>
   <td>100m</td>
   <td>
-    Cache entries limited to the specified memory footprint in bytes.
+    Cache entries limited to the specified memory footprint, in bytes unless otherwise specified.
   </td>
 </tr>
 <tr>
@@ -1207,16 +1207,18 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.io.compression.lz4.blockSize</code></td>
   <td>32k</td>
   <td>
-    Block size in bytes used in LZ4 compression, in the case when LZ4 compression codec
+    Block size used in LZ4 compression, in the case when LZ4 compression codec
     is used. Lowering this block size will also lower shuffle memory usage when LZ4 is used.
+    Default unit is bytes, unless otherwise specified.
   </td>
 </tr>
 <tr>
   <td><code>spark.io.compression.snappy.blockSize</code></td>
   <td>32k</td>
   <td>
-    Block size in bytes used in Snappy compression, in the case when Snappy compression codec
-    is used. Lowering this block size will also lower shuffle memory usage when Snappy is used.
+    Block size in Snappy compression, in the case when Snappy compression codec is used.
+    Lowering this block size will also lower shuffle memory usage when Snappy is used.
+    Default unit is bytes, unless otherwise specified.
   </td>
 </tr>
 <tr>
@@ -1384,7 +1386,7 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.memory.offHeap.size</code></td>
   <td>0</td>
   <td>
-    The absolute amount of memory in bytes which can be used for off-heap allocation.
+    The absolute amount of memory which can be used for off-heap allocation, in bytes unless otherwise specified.
     This setting has no impact on heap memory usage, so if your executors' total memory consumption
     must fit within some hard limit then be sure to shrink your JVM heap size accordingly.
     This must be set to a positive value when <code>spark.memory.offHeap.enabled=true</code>.
@@ -1568,9 +1570,9 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.storage.memoryMapThreshold</code></td>
   <td>2m</td>
   <td>
-    Size in bytes of a block above which Spark memory maps when reading a block from disk.
-    This prevents Spark from memory mapping very small blocks. In general, memory
-    mapping has high overhead for blocks close to or below the page size of the operating system.
+    Size of a block above which Spark memory maps when reading a block from disk. Default unit is bytes,
+    unless specified otherwise. This prevents Spark from memory mapping very small blocks. In general,
+    memory mapping has high overhead for blocks close to or below the page size of the operating system.
   </td>
 </tr>
 <tr>
```
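As a usage sketch (not part of this commit), `SparkConf.getSizeAsBytes` applies the same unit rules the descriptions above refer to, so it can be used to confirm how a suffixed value is interpreted:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.storage.memoryMapThreshold", "2m")
  .set("spark.shuffle.service.index.cache.size", "100m")

// Suffixed values resolve to bytes: "2m" -> 2097152, "100m" -> 104857600.
println(conf.getSizeAsBytes("spark.storage.memoryMapThreshold"))
println(conf.getSizeAsBytes("spark.shuffle.service.index.cache.size"))
```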