### What changes were proposed in this pull request?

Spark 3.0 accidentally dropped support for R < 3.5. It is built by R 3.6.3, which does not support R < 3.5:

```
Error in readRDS(pfile) : cannot read workspace version 3 written by R 3.6.3; need R 3.5.0 or newer version.
```

In fact, with SPARK-31918, we will have to drop R < 3.5 entirely to support R 4.0.0. This is unavoidable for releasing on CRAN, because CRAN requires the tests to pass with the latest R.

### Why are the changes needed?

To show the supported versions correctly, and to support R 4.0.0 to unblock the releases.

### Does this PR introduce _any_ user-facing change?

Effectively no, because Spark 3.0.0 already does not work with R < 3.5. Compared to Spark 2.4, yes: R < 3.5 no longer works.

### How was this patch tested?

Jenkins should test it out.

Closes #28908 from HyukjinKwon/SPARK-32073.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
-->
## Building SparkR on Windows
To build SparkR on Windows, the following steps are required:

- Make sure `bash` is available and in `PATH` if you already have a built-in `bash` on Windows. If you do not have one, install Cygwin.
- Install R (>= 3.5) and Rtools. Make sure to include Rtools and R in `PATH`.
- Install a JDK that SparkR supports (see `R/pkg/DESCRIPTION`), and set `JAVA_HOME` in the system environment variables.
- Download and install Maven. Also include Maven's `bin` directory in `PATH`.
- Set `MAVEN_OPTS` as described in Building Spark.
- Open a command shell (`cmd`) in the Spark directory and build Spark with Maven, including the `-Psparkr` profile to build the R package. For example, to use the default Hadoop versions you can run:

  ```
  mvn.cmd -DskipTests -Psparkr package
  ```

  Note that `.\build\mvn` is a shell script, so `mvn.cmd` on the system should be used directly on Windows. Make sure your Maven version matches `maven.version` in `./pom.xml`.
Note that this is a workaround for SparkR developers on Windows. Apache Spark does not officially support building on Windows yet, whereas it does support running on Windows.
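The build steps above can be sketched as a single `cmd` session. This is a hypothetical configuration fragment, not an official script: the install paths, the JDK and R versions, and the `MAVEN_OPTS` value are placeholder assumptions that you should adjust to your machine and to `R/pkg/DESCRIPTION`.

```
:: Hypothetical cmd session sketching the build steps; all paths below are
:: placeholders for wherever you installed the JDK, R, Rtools, and Maven.
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_251
set PATH=C:\Rtools\bin;C:\Program Files\R\R-4.0.0\bin;%JAVA_HOME%\bin;C:\maven\bin;%PATH%

:: Example memory settings; see "Building Spark" for recommended values.
set MAVEN_OPTS=-Xmx2g

:: From the Spark directory, build with the SparkR profile.
:: Use mvn.cmd directly; .\build\mvn is a shell script and will not work here.
mvn.cmd -DskipTests -Psparkr package
```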
## Unit tests
To run the SparkR unit tests on Windows, the following steps are required, assuming you are in the Spark root directory and do not have Apache Hadoop installed already:
- Create a folder to download Hadoop-related files for Windows. For example, `cd ..` and `mkdir hadoop`.
- Download the relevant Hadoop `bin` package from steveloughran/winutils. While these are not official ASF artifacts, they are built from the ASF release git hashes by a Hadoop PMC member on a dedicated Windows VM. For further reading, consult Windows Problems on the Hadoop wiki.
- Install the files into `hadoop\bin`; make sure that `winutils.exe` and `hadoop.dll` are present.
- Set the environment variable `HADOOP_HOME` to the full path to the newly created `hadoop` directory.
- Run unit tests for SparkR by running the command below. You need to install the needed packages following the instructions under Running R Tests first:

  ```
  .\bin\spark-submit2.cmd --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R
  ```
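Taken together, the test setup above can be sketched as one hypothetical `cmd` session. The directory layout (`..\hadoop`, a Spark checkout named `spark`) is an illustrative assumption, and the winutils files must be copied in manually from steveloughran/winutils as described above.

```
:: Hypothetical cmd session sketching the test setup; directory names are
:: placeholders. Start from the Spark root directory.
cd ..
mkdir hadoop

:: Manually place winutils.exe and hadoop.dll (from steveloughran/winutils)
:: into hadoop\bin, then point HADOOP_HOME at the new directory:
set HADOOP_HOME=C:\path\to\hadoop

:: Back in the Spark root directory (assumed to be named "spark" here),
:: run the SparkR tests:
cd spark
.\bin\spark-submit2.cmd --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R
```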