# spark-instrumented-optimizer/dev
Latest commit: c5443560b7 by Dongjoon Hyun, [MINOR][BUILD] Enable RAT checking on LZ4BlockInputStream.java.
## What changes were proposed in this pull request?

Since `LZ4BlockInputStream.java` is not licensed to the Apache Software Foundation (ASF), the Apache License header of that file has not been monitored until now.
This PR aims to enable RAT checking on `LZ4BlockInputStream.java` by removing it from `dev/.rat-excludes`.
This will prevent accidental removal of the Apache License header from that file.
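
`dev/.rat-excludes` is a plain-text list of path patterns, one per line, that the Apache RAT check skips. As a minimal sketch of the change (assuming the entry is the bare file name), it amounts to deleting that one line:

```bash
# Drop the exclusion entry so Apache RAT resumes checking the file's
# license header. (Illustrative only; the PR simply deletes this line
# from dev/.rat-excludes directly.)
sed -i '/^LZ4BlockInputStream\.java$/d' dev/.rat-excludes
```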

## How was this patch tested?

Pass the Jenkins tests (specifically, the RAT check stage).
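
The check can also be run locally with the `check-license` script in this directory (assumed to fetch the Apache RAT jar on first use):

```bash
# From the repository root: run the same Apache RAT license-header
# check that the Jenkins RAT stage runs.
./dev/check-license
```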

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #12677 from dongjoon-hyun/minor_rat_exclusion_file.
Committed: 2016-04-27 09:15:06 +01:00
| Name | Last commit | Date |
| --- | --- | --- |
| audit-release | [SPARK-14721][SQL] Remove HiveContext (part 2) | 2016-04-25 13:23:05 -07:00 |
| create-release | [SPARK-13596][BUILD] Move misc top-level build files into appropriate subdirs | 2016-03-07 14:48:02 -08:00 |
| deps | [SPARK-14787][SQL] Upgrade Joda-Time library from 2.9 to 2.9.3 | 2016-04-21 11:32:27 +01:00 |
| sparktestsupport | [SPARK-14807] Create a compatibility module | 2016-04-22 17:50:24 -07:00 |
| tests | [SPARK-10359] Enumerate dependencies in a file and diff against it for new pull requests | 2015-12-30 12:47:42 -08:00 |
| .gitignore | [SPARK-6219] Reuse pep8.py | 2015-04-18 16:46:28 -07:00 |
| .rat-excludes | [MINOR][BUILD] Enable RAT checking on LZ4BlockInputStream.java. | 2016-04-27 09:15:06 +01:00 |
| change-scala-version.sh | [SPARK-9250] Make change-scala-version more helpful w.r.t. valid Scala versions | 2015-07-24 17:09:33 +01:00 |
| change-version-to-2.10.sh | [SPARK-9304] [BUILD] Improve backwards compatibility of SPARK-8401 | 2015-07-25 11:05:08 +01:00 |
| change-version-to-2.11.sh | [SPARK-9304] [BUILD] Improve backwards compatibility of SPARK-8401 | 2015-07-25 11:05:08 +01:00 |
| check-license | [SPARK-13596][BUILD] Move misc top-level build files into appropriate subdirs | 2016-03-07 14:48:02 -08:00 |
| checkstyle-suppressions.xml | [SPARK-14011][CORE][SQL] Enable LineLength Java checkstyle rule | 2016-03-21 07:58:57 +00:00 |
| checkstyle.xml | [SPARK-14868][BUILD] Enable NewLineAtEofChecker in checkstyle and fix lint-java errors | 2016-04-24 20:40:03 -07:00 |
| github_jira_sync.py | Fix install jira-python | 2015-05-23 09:14:07 -07:00 |
| lint-java | [SPARK-6990][BUILD] Add Java linting script; fix minor warnings | 2015-12-04 12:03:45 -08:00 |
| lint-python | [SPARK-13887][PYTHON][TRIVIAL][BUILD] Make lint-python script fail fast | 2016-03-25 12:53:34 +00:00 |
| lint-r | [SPARK-10328] [SPARKR] Fix generic for na.omit | 2015-08-28 00:37:50 -07:00 |
| lint-r.R | [SPARK-14074][SPARKR] Specify commit sha1 ID when using install_github to install intr package. | 2016-03-23 07:57:03 -07:00 |
| lint-scala | [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically | 2014-08-06 12:58:24 -07:00 |
| make-distribution.sh | [SPARK-13579][BUILD] Stop building the main Spark assembly. | 2016-04-04 16:52:22 -07:00 |
| merge_spark_pr.py | [SPARK-9383][PROJECT-INFRA] PR merge script should reset back to previous branch when possible | 2016-01-13 11:56:30 -08:00 |
| mima | [SPARK-13579][BUILD] Stop building the main Spark assembly. | 2016-04-04 16:52:22 -07:00 |
| README.md | Merge pull request #565 from pwendell/dev-scripts. Closes #565. | 2014-02-08 23:13:34 -08:00 |
| requirements.txt | [SPARK-10498][TOOLS][BUILD] Add requirements.txt file for dev python tools | 2016-01-24 11:48:28 -08:00 |
| run-tests | [SPARK-5161] Parallelize Python test execution | 2015-06-29 21:32:40 -07:00 |
| run-tests-jenkins | [SPARK-7018][BUILD] Refactor dev/run-tests-jenkins into Python | 2015-10-18 22:45:27 -07:00 |
| run-tests-jenkins.py | [SPARK-12842][TEST-HADOOP2.7] Add Hadoop 2.7 build profile | 2016-01-15 17:07:24 -08:00 |
| run-tests.py | [SPARK-14807] Create a compatibility module | 2016-04-22 17:50:24 -07:00 |
| scalastyle | [SPARK-12152][PROJECT-INFRA] Speed up Scalastyle checks by only invoking SBT once | 2015-12-06 17:35:01 -08:00 |
| test-dependencies.sh | [SPARK-12842][TEST-HADOOP2.7] Add Hadoop 2.7 build profile | 2016-01-15 17:07:24 -08:00 |
| tox.ini | [SPARK-13596][BUILD] Move misc top-level build files into appropriate subdirs | 2016-03-07 14:48:02 -08:00 |

# Spark Developer Scripts

This directory contains scripts useful to developers when packaging, testing, or committing to Spark.

Many of these scripts require Apache credentials to work correctly.
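
As a hedged sketch of typical usage (exact options vary by script and Spark version; see each script's header comments):

```bash
# Run the full Jenkins-style test suite (long-running).
./dev/run-tests

# Per-language style checks.
./dev/lint-python
./dev/scalastyle

# Build a binary distribution tarball.
./dev/make-distribution.sh --tgz
```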