Apache Spark - A unified analytics engine for large-scale data processing
Latest commit 3ba5aaab82 by Cheng Hao: [SPARK-5213] [SQL] Pluggable SQL Parser Support
This PR aims to make the SQL parser pluggable, so users can register their own parser via the Spark SQL CLI.

```
# add the jar into the classpath
hchengmydesktop:spark> bin/spark-sql --jars sql99.jar

-- switch to "hiveql" dialect
   spark-sql>SET spark.sql.dialect=hiveql;
   spark-sql>SELECT * FROM src LIMIT 1;

-- switch to "sql" dialect
   spark-sql>SET spark.sql.dialect=sql;
   spark-sql>SELECT * FROM src LIMIT 1;

-- switch to a custom dialect
   spark-sql>SET spark.sql.dialect=com.xxx.xxx.SQL99Dialect;
   spark-sql>SELECT * FROM src LIMIT 1;

-- register a non-existent SQL dialect
   spark-sql> SET spark.sql.dialect=NotExistedClass;
   spark-sql> SELECT * FROM src LIMIT 1;
-- An exception is thrown and the session falls back to the default dialect ("sql" for SQLContext, "hiveql" for HiveContext)
```

Author: Cheng Hao <hao.cheng@intel.com>

Closes #4015 from chenghao-intel/sqlparser and squashes the following commits:

493775c [Cheng Hao] update the code as feedback
81a731f [Cheng Hao] remove the unecessary comment
aab0b0b [Cheng Hao] polish the code a little bit
49b9d81 [Cheng Hao] shrink the comment for rebasing
Committed on 2015-04-30 18:49:06 -07:00
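
The CLI session above drives the dialect switch entirely through `SET spark.sql.dialect=...`. As an illustrative sketch (not part of the commit itself), the same property can also be set from application code through `SQLContext.setConf`. The snippet below assumes a Spark 1.x build with Hive support and a Hive table named `src`, and reuses the placeholder dialect class `com.xxx.xxx.SQL99Dialect` from the example, which would have to exist on the classpath.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Illustrative sketch only: switch SQL dialects programmatically instead of
// from the spark-sql CLI. Assumes a Spark 1.x HiveContext and a Hive table
// named `src`; the custom dialect class name is a placeholder.
object DialectSwitchExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DialectSwitchExample"))
    val sqlContext = new HiveContext(sc)

    // Use the built-in HiveQL parser (the default for HiveContext).
    sqlContext.setConf("spark.sql.dialect", "hiveql")
    sqlContext.sql("SELECT * FROM src LIMIT 1").show()

    // Switch to a custom parser; the class name below is the placeholder from
    // the example above and must resolve to a dialect on the classpath.
    sqlContext.setConf("spark.sql.dialect", "com.xxx.xxx.SQL99Dialect")
    sqlContext.sql("SELECT * FROM src LIMIT 1").show()

    sc.stop()
  }
}
```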
| Path | Last commit | Date |
|------|-------------|------|
| assembly | [SPARK-7168] [BUILD] Update plugin versions in Maven build and centralize versions | 2015-04-28 07:48:34 -04:00 |
| bagel | [SPARK-6758] block the right jetty package in log | 2015-04-09 17:44:08 -04:00 |
| bin | [SPARK-6435] spark-shell --jars option does not add all jars to classpath | 2015-04-28 07:56:36 -04:00 |
| build | SPARK-5856: In Maven build script, launch Zinc with more memory | 2015-02-17 10:10:01 -08:00 |
| conf | [SPARK-4286] Add an external shuffle service that can be run as a daemon. | 2015-04-28 12:08:18 -07:00 |
| core | Revert "[SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS" | 2015-04-30 14:59:20 -07:00 |
| data/mllib | [SPARK-5939][MLLib] make FPGrowth example app take parameters | 2015-02-23 08:47:28 -08:00 |
| dev | [SPARK-4925] Publish Spark SQL hive-thriftserver maven artifact | 2015-04-27 11:27:56 -07:00 |
| docker | [SPARK-1342] Scala 2.10.4 | 2014-04-01 18:35:50 -07:00 |
| docs | Revert "[SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS" | 2015-04-30 14:59:20 -07:00 |
| ec2 | [SPARK-4897] [PySpark] Python 3 support | 2015-04-16 16:20:57 -07:00 |
| examples | [SPARK-7176] [ML] Add validation functionality to Param | 2015-04-29 17:26:46 -07:00 |
| external | [SPARK-7056] [STREAMING] Make the Write Ahead Log pluggable | 2015-04-29 13:06:11 -07:00 |
| extras | [SPARK-6440][CORE] Handle IPv6 addresses properly when constructing URI | 2015-04-13 12:55:25 +01:00 |
| graphx | SPARK-6710 GraphX Fixed Wrong initial bias in GraphX SVDPlusPlus | 2015-04-11 21:01:23 -07:00 |
| launcher | Revert "[SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS" | 2015-04-30 14:59:20 -07:00 |
| mllib | [SPARK-7279] Removed diffSum which is theoretical zero in LinearRegression and coding formating | 2015-04-30 16:26:51 -07:00 |
| network | [SPARK-5932] [CORE] Use consistent naming for size properties | 2015-04-28 12:18:55 -07:00 |
| project | [Build] Enable MiMa checks for SQL | 2015-04-30 16:23:01 -07:00 |
| python | [SPARK-7156][SQL] Addressed follow up comments for randomSplit | 2015-04-29 19:13:47 -07:00 |
| R | [SPARK-6991] [SPARKR] Adds support for zipPartitions. | 2015-04-27 15:04:37 -07:00 |
| repl | [SPARK-7092] Update spark scala version to 2.11.6 | 2015-04-25 18:07:34 -04:00 |
| sbin | [SPARK-5338] [MESOS] Add cluster mode support for Mesos | 2015-04-28 13:33:57 -07:00 |
| sbt | Adde LICENSE Header to build/mvn, build/sbt and sbt/sbt | 2014-12-29 10:48:53 -08:00 |
| sql | [SPARK-5213] [SQL] Pluggable SQL Parser Support | 2015-04-30 18:49:06 -07:00 |
| streaming | [SPARK-6862] [STREAMING] [WEBUI] Add BatchPage to display details of a batch | 2015-04-29 18:22:14 -07:00 |
| tools | [SPARK-6428] Turn on explicit type checking for public methods. | 2015-04-03 01:25:02 -07:00 |
| unsafe | [SPARK-7288] Suppress compiler warnings due to use of sun.misc.Unsafe; add facade in front of Unsafe; remove use of Unsafe.setMemory | 2015-04-30 15:21:00 -07:00 |
| yarn | Revert "[SPARK-5342] [YARN] Allow long running Spark apps to run on secure YARN/HDFS" | 2015-04-30 14:59:20 -07:00 |
| .gitattributes | [SPARK-3870] EOL character enforcement | 2014-10-31 12:39:52 -07:00 |
| .gitignore | [SPARK-5654] Integrate SparkR | 2015-04-08 22:45:40 -07:00 |
| .rat-excludes | [SPARK-5654] Integrate SparkR | 2015-04-08 22:45:40 -07:00 |
| CONTRIBUTING.md | [SPARK-6889] [DOCS] CONTRIBUTING.md updates to accompany contribution doc updates | 2015-04-21 22:34:31 -07:00 |
| LICENSE | [SPARK-1406] Mllib pmml model export | 2015-04-29 23:21:21 -07:00 |
| make-distribution.sh | [SPARK-6122] [CORE] Upgrade tachyon-client version to 0.6.3 | 2015-04-24 17:57:41 -04:00 |
| NOTICE | SPARK-1827. LICENSE and NOTICE files need a refresh to contain transitive dependency info | 2014-05-14 09:38:33 -07:00 |
| pom.xml | [SPARK-7076][SPARK-7077][SPARK-7080][SQL] Use managed memory for aggregations | 2015-04-29 01:07:26 -07:00 |
| README.md | [docs] [SPARK-6306] Readme points to dead link | 2015-03-12 15:01:33 +00:00 |
| scalastyle-config.xml | [SPARK-6428] Turn on explicit type checking for public methods. | 2015-04-03 01:25:02 -07:00 |
| tox.ini | [SPARK-3073] [PySpark] use external sort in sortBy() and sortByKey() | 2014-08-26 16:57:40 -07:00 |

Apache Spark

Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

http://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page and project wiki. This README file only contains basic setup instructions.

Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

mvn -DskipTests clean package

(You do not need to do this if you downloaded a pre-built package.) More detailed documentation is available from the project site, at "Building Spark".

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()
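
The shell exposes the full RDD API on `sc`, so transformations can be chained before the action. The following one-liners are illustrative additions, not part of the original instructions:

```
scala> sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()   // keeps only the even numbers, returns 500
scala> sc.parallelize(1 to 1000).reduce(_ + _)                // 1 + 2 + ... + 1000 = 500500
```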

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1000:

>>> sc.parallelize(range(1000)).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit them to a cluster. The value can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./dev/run-tests

Please see the guidance on the project wiki on how to run all automated tests.

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "Specifying the Hadoop Version" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions. See also "Third Party Hadoop Distributions" for guidance on building a Spark application that works with a particular distribution.

Configuration

Please refer to the Configuration guide in the online documentation for an overview on how to configure Spark.
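
As one pattern the guide covers, settings can also be supplied programmatically on a SparkConf before the SparkContext is created. A minimal sketch follows; the property values are placeholders, not recommendations:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Build the configuration explicitly instead of relying on spark-submit flags
// or conf/spark-defaults.conf. The values below are placeholders.
val conf = new SparkConf()
  .setAppName("MyApp")
  .setMaster("local[4]") // or a spark:// / mesos:// URL, or a YARN mode
  .set("spark.executor.memory", "2g")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

val sc = new SparkContext(conf)
```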