[MINOR][DOCS] Fix typos in configuration.md and hardware-provisioning.md
## What changes were proposed in this pull request?

This PR fixes some typos in the following documentation files: `NOTICE`, `configuration.md`, and `hardware-provisioning.md`.

## How was this patch tested?

Manual tests.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #11289 from dongjoon-hyun/minor_fix_typos_notice_and_confdoc.
This commit is contained in:
parent 6c3832b26e
commit 03e62aa3f6
@@ -249,7 +249,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>false</td>
   <td>
     (Experimental) Whether to give user-added jars precedence over Spark's own jars when loading
-    classes in the the driver. This feature can be used to mitigate conflicts between Spark's
+    classes in the driver. This feature can be used to mitigate conflicts between Spark's
     dependencies and user dependencies. It is currently an experimental feature.

     This is used in cluster mode only.
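This hunk describes Spark's experimental switch for giving user-added jars precedence in the driver. As an illustrative sketch (the property name `spark.driver.userClassPathFirst` is assumed from the surrounding description; it does not appear in the hunk itself), it could be enabled in `spark-defaults.conf` like so:

```properties
# spark-defaults.conf (illustrative sketch)
# Experimental: load user-added jars before Spark's own jars when loading
# classes in the driver. Takes effect in cluster mode only.
spark.driver.userClassPathFirst  true
```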
@@ -373,7 +373,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>
     Reuse Python worker or not. If yes, it will use a fixed number of Python workers,
     does not need to fork() a Python process for every tasks. It will be very useful
-    if there is large broadcast, then the broadcast will not be needed to transfered
+    if there is large broadcast, then the broadcast will not be needed to transferred
     from JVM to Python worker for every task.
   </td>
 </tr>
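The setting described here is Spark's Python worker reuse flag. A minimal sketch of how it might appear in `spark-defaults.conf` (the property name `spark.python.worker.reuse` is assumed from the surrounding description):

```properties
# spark-defaults.conf (illustrative sketch)
# Keep a fixed pool of Python workers rather than forking one per task;
# avoids re-sending large broadcast variables from the JVM on every task.
spark.python.worker.reuse  true
```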
@@ -1266,7 +1266,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>
     Comma separated list of users/administrators that have view and modify access to all Spark jobs.
     This can be used if you run on a shared cluster and have a set of administrators or devs who
-    help debug when things work. Putting a "*" in the list means any user can have the priviledge
+    help debug when things work. Putting a "*" in the list means any user can have the privilege
     of admin.
   </td>
 </tr>
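This hunk concerns the admin access-control list. A hedged sketch of the setting in `spark-defaults.conf` (the property name `spark.admin.acls` is assumed from the surrounding description; the user names are hypothetical):

```properties
# spark-defaults.conf (illustrative sketch; user names are hypothetical)
# Comma-separated users with view and modify access to all Spark jobs.
spark.admin.acls  alice,bob
# A single "*" would grant the admin privilege to any user:
# spark.admin.acls  *
```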
@@ -1604,7 +1604,7 @@ Apart from these, the following properties are also available, and may be useful
 #### Deploy

 <table class="table">
-<tr><th>Property Name</th><th>Default</th><th>Meaniing</th></tr>
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
   <td><code>spark.deploy.recoveryMode</code></td>
   <td>NONE</td>
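The table in this hunk names `spark.deploy.recoveryMode` with default `NONE`. As a sketch of a non-default configuration (the companion property `spark.deploy.zookeeper.url` and the ZooKeeper addresses are assumptions, not shown in the hunk):

```properties
# spark-defaults.conf (illustrative sketch; the ZooKeeper URL is hypothetical)
# Enable standalone-master recovery through ZooKeeper instead of the
# default NONE.
spark.deploy.recoveryMode   ZOOKEEPER
spark.deploy.zookeeper.url  zk1:2181,zk2:2181
```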
@@ -1693,7 +1693,7 @@ Spark uses [log4j](http://logging.apache.org/log4j/) for logging. You can config
 # Overriding configuration directory

 To specify a different configuration directory other than the default "SPARK_HOME/conf",
-you can set SPARK_CONF_DIR. Spark will use the the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc)
+you can set SPARK_CONF_DIR. Spark will use the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc)
 from this directory.

 # Inheriting Hadoop Cluster Configuration
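Overriding the configuration directory, as described in this hunk, comes down to exporting one environment variable before launching Spark. A minimal sketch (the path is hypothetical):

```shell
# Illustrative sketch; the path is hypothetical.
# Spark then reads spark-defaults.conf, spark-env.sh, log4j.properties, etc.
# from this directory instead of SPARK_HOME/conf.
export SPARK_CONF_DIR=/etc/spark/conf
```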
@@ -63,7 +63,7 @@ from the application's monitoring UI (`http://<driver-node>:4040`).

 # CPU Cores

-Spark scales well to tens of CPU cores per machine because it performes minimal sharing between
+Spark scales well to tens of CPU cores per machine because it performs minimal sharing between
 threads. You should likely provision at least **8-16 cores** per machine. Depending on the CPU
 cost of your workload, you may also need more: once data is in memory, most applications are
 either CPU- or network-bound.
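The 8-16 core guidance in this hunk is about hardware, but one way it can surface in configuration is through per-executor core counts. A hedged sketch (the property name `spark.executor.cores` and the value are assumptions; the value merely echoes the guidance above and is not prescriptive):

```properties
# spark-defaults.conf (illustrative sketch; value follows the 8-16 core
# guidance and is not prescriptive)
spark.executor.cores  8
```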