87ffe7addd
## What changes were proposed in this pull request?
Note that this PR is built on top of https://github.com/apache/spark/pull/20151, so it leaves the main code almost intact.
This PR proposes to add a script that prepares automatic PySpark coverage generation. Currently, it is difficult to check the actual test coverage of PySpark. This script lets us run the tests the same way we do via the `run-tests` script; the usage is exactly the same as `run-tests` because this script simply wraps it.
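For example, assuming the script is added as `python/run-tests-with-coverage`, an invocation could look like the following; the flags are the existing `run-tests` flags, passed through unchanged:

```
cd python
./run-tests-with-coverage --python-executables=python3 --modules=pyspark-sql
```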
This script should be useful on its own. I have been asked how to run this before, and it seems some reviewers (including me) need it. It would also be useful to be able to run it manually.
Enabling coverage usually requires only a small diff in an ordinary Python project, but PySpark is a bit different because we are apparently unable to track coverage after the worker process is forked. So, here, I made a custom worker that forces coverage to start, built on top of https://github.com/apache/spark/pull/20151.
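As a rough illustration of the idea (a sketch, not the PR's actual code), the wrapping daemon module could start `coverage` around the worker entry point before delegating to the regular daemon loop. The monkey-patching of `worker.main`, the use of `daemon.manager()` as the entry point, and the name `_cov_wrapped` are assumptions about PySpark internals here:

```python
# Hypothetical sketch of a coverage-forcing daemon module.
import os

from pyspark import daemon, worker


def _cov_wrapped(*args, **kwargs):
    # Start coverage explicitly inside the forked worker process, using the
    # config file pointed to by COVERAGE_PROCESS_START.
    import coverage
    cov = coverage.Coverage(config_file=os.environ["COVERAGE_PROCESS_START"])
    cov.start()
    try:
        worker.main(*args, **kwargs)
    finally:
        cov.stop()
        cov.save()  # Write a per-process coverage data file.


# Replace the worker entry point, then run the normal daemon loop.
worker.main = _cov_wrapped

if __name__ == "__main__":
    daemon.manager()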
I made a simple demo. Please take a look - https://spark-test.github.io/pyspark-coverage-site.
To show the structure, this PR adds the following files:
```
python
├── .coveragerc # Runtime configuration when we run the script.
├── run-tests-with-coverage # The script that has coverage support and wraps run-tests script.
└── test_coverage # Directory with the files required to run coverage.
    ├── conf
    │   └── spark-defaults.conf # Sets the configuration 'spark.python.daemon.module'.
    ├── coverage_daemon.py # A daemon with a custom fix, wrapping our daemon.py.
    └── sitecustomize.py # Initiates coverage with COVERAGE_PROCESS_START.
```
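For illustration, the two small files above might look roughly like the sketches below. This assumes `spark.python.daemon.module` names the Python module Spark launches as the daemon (per https://github.com/apache/spark/pull/20151) and that a hypothetical `coverage_daemon` module like the one sketched earlier exists on the Python path:

```
# conf/spark-defaults.conf: make executors launch the wrapping daemon.
spark.python.daemon.module coverage_daemon
```

```python
# sitecustomize.py: Python imports this module automatically at interpreter
# startup, so any process started with COVERAGE_PROCESS_START set begins
# measuring coverage as soon as it starts.
import coverage

coverage.process_startup()  # no-op when COVERAGE_PROCESS_START is unset
```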
Note that this PR has a minor nit: [This scope](…)