[MINOR] [PYSPARK] [EXAMPLES] Changed examples to use SparkSession.sparkContext instead of _sc
## What changes were proposed in this pull request?

Some PySpark examples need a SparkContext and get it by accessing `_sc` directly from the session. These examples should use the provided `sparkContext` property of `SparkSession` instead.

## How was this patch tested?

Ran the modified examples.

Author: Bryan Cutler <cutlerb@gmail.com>

Closes #13303 from BryanCutler/pyspark-session-sparkContext-MINOR.
parent 698ef762f8
commit 9c297df3d4
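Every hunk below applies the same one-line change: the example builds a `SparkSession` and then takes the public `sparkContext` property instead of reaching into the private `_sc` attribute. A minimal sketch of the pattern, with a placeholder app name and a throwaway RDD that are not part of the actual examples:

```python
from pyspark.sql import SparkSession

if __name__ == "__main__":
    # Build (or reuse) a session; "ExampleApp" is just a placeholder name.
    spark = SparkSession\
        .builder\
        .appName("ExampleApp")\
        .getOrCreate()

    # Before: sc = spark._sc  (private attribute)
    # After: use the public property, which returns the underlying SparkContext.
    sc = spark.sparkContext

    # Any RDD work can now go through sc as before.
    rdd = sc.parallelize(range(10))
    print(rdd.count())

    spark.stop()
```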
@@ -67,7 +67,7 @@ if __name__ == "__main__":
         .appName("PythonALS")\
         .getOrCreate()
 
-    sc = spark._sc
+    sc = spark.sparkContext
 
     M = int(sys.argv[1]) if len(sys.argv) > 1 else 100
     U = int(sys.argv[2]) if len(sys.argv) > 2 else 500
@@ -70,7 +70,7 @@ if __name__ == "__main__":
         .appName("AvroKeyInputFormat")\
         .getOrCreate()
 
-    sc = spark._sc
+    sc = spark.sparkContext
 
     conf = None
     if len(sys.argv) == 3:
@@ -53,7 +53,7 @@ if __name__ == "__main__":
         .appName("ParquetInputFormat")\
         .getOrCreate()
 
-    sc = spark._sc
+    sc = spark.sparkContext
 
     parquet_rdd = sc.newAPIHadoopFile(
         path,
@@ -32,7 +32,7 @@ if __name__ == "__main__":
         .appName("PythonPi")\
         .getOrCreate()
 
-    sc = spark._sc
+    sc = spark.sparkContext
 
     partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
     n = 100000 * partitions
@@ -46,7 +46,7 @@ if __name__ == "__main__":
         .appName("PythonTransitiveClosure")\
         .getOrCreate()
 
-    sc = spark._sc
+    sc = spark.sparkContext
 
     partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
     tc = sc.parallelize(generateGraph(), partitions).cache()