diff --git a/docs/quick-start.md b/docs/quick-start.md
index 1b961fd455..a29e28faf2 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -40,7 +40,7 @@ RDDs have _[actions](programming-guide.html#actions)_, which return values, and
 
 {% highlight scala %}
 scala> textFile.count() // Number of items in this RDD
-res0: Long = 126
+res0: Long = 126 // May be different from yours, as README.md changes over time; similarly for other outputs
 
 scala> textFile.first() // First item in this RDD
 res1: String = # Apache Spark
@@ -184,10 +184,10 @@ scala> linesWithSpark.cache()
 res7: linesWithSpark.type = MapPartitionsRDD[2] at filter at <console>:27
 
 scala> linesWithSpark.count()
-res8: Long = 19
+res8: Long = 15
 
 scala> linesWithSpark.count()
-res9: Long = 19
+res9: Long = 15
 {% endhighlight %}
 
 It may seem silly to use Spark to explore and cache a 100-line text file. The interesting part is
@@ -202,10 +202,10 @@ a cluster, as described in the [programming guide](programming-guide.html#initia
 >>> linesWithSpark.cache()
 
 >>> linesWithSpark.count()
-19
+15
 
 >>> linesWithSpark.count()
-19
+15
 {% endhighlight %}
 
 It may seem silly to use Spark to explore and cache a 100-line text file. The interesting part is
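
For reference, the outputs this patch updates come from the quick-start walkthrough itself. Below is a minimal sketch of those shell steps, assuming the setup shown earlier in quick-start.md (README.md read from the Spark source root, `linesWithSpark` built by filtering on the word "Spark"); the exact counts depend on the README.md revision, which is what the new inline comment calls out.

{% highlight scala %}
// Count the lines of README.md, then cache and count the lines mentioning Spark.
// The literal numbers (126, 15) are whatever the current README.md happens to yield.
val textFile = sc.textFile("README.md")
textFile.count()                 // e.g. 126 in the revision the patch targets

val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark.cache()           // keep the filtered RDD in memory
linesWithSpark.count()           // e.g. 15; the first count materializes the cache
linesWithSpark.count()           // same value, now served from the cached partitions
{% endhighlight %}

The two identical counts are the point of the example: the second call is answered from the in-memory cache rather than by re-reading and re-filtering the file.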