Merge pull request #98 from aarondav/docs

Docs: Fix links to RDD API documentation
Matei Zaharia 2013-10-22 10:30:02 -07:00
commit aa9019fc82


@@ -142,7 +142,7 @@ All transformations in Spark are <i>lazy</i>, in that they do not compute their
 By default, each transformed RDD is recomputed each time you run an action on it. However, you may also *persist* an RDD in memory using the `persist` (or `cache`) method, in which case Spark will keep the elements around on the cluster for much faster access the next time you query it. There is also support for persisting datasets on disk, or replicated across the cluster. The next section in this document describes these options.
-The following tables list the transformations and actions currently supported (see also the [RDD API doc](api/core/index.html#org.apache.spark.RDD) for details):
+The following tables list the transformations and actions currently supported (see also the [RDD API doc](api/core/index.html#org.apache.spark.rdd.RDD) for details):
 ### Transformations
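The context line in the hunk above describes Spark's recompute-by-default behavior versus `persist`/`cache`. As a rough illustration only, here is a toy sketch in plain Python (not Spark's API; `ToyRDD`, `collect`, and the counter list are invented for this example) of the difference between recomputing a lazy pipeline on every action and caching it after the first one:

```python
# Toy analogy (NOT Spark): a "transformation" only records work as a thunk;
# an "action" forces it. Without persist(), every action recomputes the
# lineage; with persist(), the result is cached after the first action.
class ToyRDD:
    def __init__(self, compute):
        self._compute = compute      # thunk producing the elements
        self._cache = None
        self._persisted = False

    def map(self, f):
        # Lazy: no work happens here, we only build a new thunk.
        return ToyRDD(lambda: [f(x) for x in self._materialize()])

    def persist(self):
        self._persisted = True
        return self

    def _materialize(self):
        if self._persisted:
            if self._cache is None:
                self._cache = self._compute()   # compute once, keep
            return self._cache
        return self._compute()                  # recompute every time

    def collect(self):                          # an "action"
        return list(self._materialize())


calls = []  # counts how often the base dataset is actually computed
base = ToyRDD(lambda: calls.append("computed") or [1, 2, 3])

doubled = base.map(lambda x: x * 2)   # lazy: nothing computed yet
doubled.collect()
doubled.collect()
print(len(calls))                     # base recomputed once per action

calls.clear()
base.persist()
doubled.collect()
doubled.collect()
print(len(calls))                     # computed once, then served from cache
```

In real Spark the same contrast applies, except the cached partitions live in executor memory across the cluster rather than in a local field.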
@@ -211,7 +211,7 @@ The following tables list the transformations and actions currently supported (s
 </tr>
 </table>
-A complete list of transformations is available in the [RDD API doc](api/core/index.html#org.apache.spark.RDD).
+A complete list of transformations is available in the [RDD API doc](api/core/index.html#org.apache.spark.rdd.RDD).
 ### Actions
@@ -259,7 +259,7 @@ A complete list of transformations is available in the [RDD API doc](api/core/in
 </tr>
 </table>
-A complete list of actions is available in the [RDD API doc](api/core/index.html#org.apache.spark.RDD).
+A complete list of actions is available in the [RDD API doc](api/core/index.html#org.apache.spark.rdd.RDD).
 ## RDD Persistence