spark-instrumented-optimizer/dev/requirements.txt
itholic b8740a1d1e [SPARK-35499][PYTHON] Apply black to pandas API on Spark codes
### What changes were proposed in this pull request?

This PR proposes applying `black` to the pandas API on Spark code to improve static analysis.

By executing `./dev/reformat-python` in the Spark home directory, all of the pandas API on Spark code is reformatted according to the static analysis rules.
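
For illustration, the reformatting step looks like this (a minimal sketch, assuming a Spark source checkout with `black` installed from `dev/requirements.txt`):

```bash
# Run from the Spark home directory; black rewrites the pandas API on Spark sources in place.
./dev/reformat-python
```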

### Why are the changes needed?

This reduces the cost of static analysis during development.

`black` has been used continuously in the Koalas project for about a year, and its convenience has been proven there.

### Does this PR introduce _any_ user-facing change?

No, it's dev-only.

### How was this patch tested?

Manually reformatted the pandas API on Spark code by running `./dev/reformat-python`, and checked that `./dev/lint-python` passes.
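
A minimal sketch of that verification flow (assuming the linter dependencies from `dev/requirements.txt` are installed):

```bash
# Reformat the pandas API on Spark sources, then confirm the Python lint checks still pass.
./dev/reformat-python
./dev/lint-python
```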

Closes #32779 from itholic/SPARK-35499.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
2021-06-06 17:30:07 -07:00

# PySpark dependencies (required)
py4j
# PySpark dependencies (optional)
numpy
pyarrow
pandas
scipy
plotly
mlflow
matplotlib<3.3.0
# PySpark test dependencies
xmlrunner
# Linter
mypy
flake8
# Documentation (SQL)
mkdocs
# Documentation (Python)
pydata_sphinx_theme
ipython
nbsphinx
numpydoc
jinja2<3.0.0
sphinx<3.1.0
sphinx-plotly-directive
# Development scripts
jira
PyGithub
# pandas API on Spark code formatter.
black
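
# One common way to install the development dependencies above (an assumed
# workflow, not prescribed by this file; run from the Spark home directory):
#   pip install -r dev/requirements.txt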