Merged

sync #21

63 commits
161cf2a
[SPARK-32024][WEBUI][FOLLOWUP] Quick fix on test failure on missing w…
HeartSaVioR Jul 9, 2020
cfecc20
[SPARK-32160][CORE][PYSPARK] Disallow to create SparkContext in execu…
ueshin Jul 9, 2020
8c5bee5
[SPARK-28067][SPARK-32018] Fix decimal overflow issues
cloud-fan Jul 9, 2020
09cc6c5
[SPARK-32193][SQL][DOCS] Update regexp usage in SQL docs
GuoPhilipse Jul 9, 2020
526cb2d
[SPARK-32148][SS] Fix stream-stream join issue on missing to copy reu…
HeartSaVioR Jul 9, 2020
c5bd073
[SPARK-32231][R][INFRA] Use Hadoop 3.2 winutils in AppVeyor build
dongjoon-hyun Jul 9, 2020
1cb5bfc
[SPARK-32159][SQL] Fix integration between Aggregator[Array[_], _, _]…
erikerlandson Jul 9, 2020
523e238
[SPARK-32192][SQL] Print column name when throws ClassCastException
Jul 9, 2020
9331a5c
[SPARK-32035][DOCS][EXAMPLES] Fixed typos involving AWS Access, Secre…
kaxio Jul 9, 2020
ac6406e
[SPARK-31831][SQL] HiveSessionImplSuite flakiness fix via mocking ins…
HeartSaVioR Jul 9, 2020
18aae21
[SPARK-31875][SQL] Provide a option to disable user supplied Hints
dilipbiswal Jul 10, 2020
01e9dd9
[SPARK-20680][SQL][FOLLOW-UP] Revert NullType.simpleString from 'unkn…
HyukjinKwon Jul 10, 2020
4609f1f
[SPARK-32207][SQL] Support 'F'-suffixed Float Literals
yaooqinn Jul 10, 2020
e6e43cb
[SPARK-32242][SQL] CliSuite flakiness fix via differentiating cli dri…
HeartSaVioR Jul 10, 2020
c8779d9
[SPARK-32256][SQL][TEST-HADOOP2.7] Force to initialize Hadoop Version…
zsxwing Jul 10, 2020
578b90c
[SPARK-32091][CORE] Ignore timeout error when remove blocks on the lo…
Ngone51 Jul 10, 2020
560fe1f
[SPARK-32220][SQL] SHUFFLE_REPLICATE_NL Hint should not change Non-Ca…
AngersZhuuuu Jul 10, 2020
500877e
[SPARK-32133][SQL] Forbid time field steps for date start/end in Sequ…
TJX2014 Jul 10, 2020
d7d5bdf
[SPARK-32103][CORE] Support IPv6 host/port in core module
PavithraRamachandran Jul 10, 2020
84db660
[SPARK-32251][SQL][DOCS][TESTS] Fix SQL keyword document
cloud-fan Jul 10, 2020
0c9196e
[SPARK-32238][SQL] Use Utils.getSimpleName to avoid hitting Malformed…
Ngone51 Jul 11, 2020
1b3fc9a
[SPARK-32149][SHUFFLE] Improve file path name normalisation at block …
attilapiros Jul 11, 2020
22f9dfb
[SPARK-32173][SQL] Deduplicate code in FromUTCTimestamp and ToUTCTime…
MaxGekk Jul 11, 2020
99b4b06
[SPARK-32232][ML][PYSPARK] Make sure ML has the same default solver v…
huaxingao Jul 11, 2020
10a65ee
[SPARK-32150][BUILD] Upgrade to ZStd 1.4.5-4
williamhyun Jul 11, 2020
b84ed41
[SPARK-32245][INFRA] Run Spark tests in Github Actions
HyukjinKwon Jul 11, 2020
ceaa392
[SPARK-32200][WEBUI] Redirect to the history page when accessed to /h…
sarutak Jul 11, 2020
3ad4863
[SPARK-29292][SPARK-30010][CORE] Let core compile for Scala 2.13
srowen Jul 11, 2020
09789ff
[SPARK-31226][CORE][TESTS] SizeBasedCoalesce logic will lose partition
AngersZhuuuu Jul 11, 2020
98504e9
[SPARK-29358][SQL] Make unionByName optionally fill missing columns w…
viirya Jul 11, 2020
004aea8
[SPARK-32154][SQL] Use ExpressionEncoder for the return type of Scala…
Ngone51 Jul 12, 2020
6ae400c
[MINOR][SQL][DOCS] consistency in argument naming for time functions
Jul 12, 2020
c56c84a
[MINOR][DOCS] Fix typo in PySpark example in ml-datasource.md
ChuliangXiao Jul 12, 2020
c4b0639
[SPARK-32270][SQL] Use TextFileFormat in CSV's schema inference with …
HyukjinKwon Jul 12, 2020
ad90cbf
[SPARK-31831][SQL][TESTS] Use subclasses for mock in HiveSessionImplS…
Jul 12, 2020
bc3d4ba
[SPARK-32245][INFRA][FOLLOWUP] Reenable Github Actions on commit
dongjoon-hyun Jul 12, 2020
b6229df
[SPARK-32258][SQL] NormalizeFloatingNumbers directly normalizes IF/Ca…
viirya Jul 12, 2020
6d49964
[SPARK-32105][SQL] Refactor current ScriptTransformationExec code
AngersZhuuuu Jul 13, 2020
5521afb
[SPARK-32220][SQL][FOLLOW-UP] SHUFFLE_REPLICATE_NL Hint should not ch…
AngersZhuuuu Jul 13, 2020
27ef362
[SPARK-32292][SPARK-32252][INFRA] Run the relevant tests only in GitH…
HyukjinKwon Jul 13, 2020
90ac9f9
[SPARK-32004][ALL] Drop references to slave
holdenk Jul 13, 2020
4ad9bfd
[SPARK-32138] Drop Python 2.7, 3.4 and 3.5
HyukjinKwon Jul 14, 2020
24be816
[SPARK-32241][SQL] Remove empty children of union
peter-toth Jul 14, 2020
cc9371d
[SPARK-32258][SQL] Not duplicate normalization on children for float/…
viirya Jul 14, 2020
d6a68e0
[SPARK-29292][STREAMING][SQL][BUILD] Get streaming, catalyst, sql com…
srowen Jul 14, 2020
a47b69a
[SPARK-32307][SQL] ScalaUDF's canonicalized expression should exclude…
Ngone51 Jul 14, 2020
2a0faca
[SPARK-32309][PYSPARK] Import missing sys import
Fokko Jul 14, 2020
5e0cb3e
[SPARK-32305][BUILD] Make `mvn clean` remove `metastore_db` and `spar…
LuciferYang Jul 14, 2020
c602d79
[SPARK-32311][PYSPARK][TESTS] Remove duplicate import
Fokko Jul 14, 2020
90b0c26
[SPARK-31608][CORE][WEBUI] Add a new type of KVStore to make loading …
Jul 14, 2020
902e134
[SPARK-32303][PYTHON][TESTS] Remove leftover from editable mode insta…
HyukjinKwon Jul 14, 2020
676d92e
[SPARK-32301][PYTHON][TESTS] Add a test case for toPandas to work wit…
HyukjinKwon Jul 14, 2020
03b5707
[MINOR][R] Match collectAsArrowToR with non-streaming collectAsArrowT…
HyukjinKwon Jul 14, 2020
6bdd710
[SPARK-32316][TESTS][INFRA] Test PySpark with Python 3.8 in Github Ac…
HyukjinKwon Jul 15, 2020
af8e65f
[SPARK-32276][SQL] Remove redundant sorts before repartition nodes
aokolnychyi Jul 15, 2020
542aefb
[SPARK-31985][SS] Remove incomplete/undocumented stateful aggregation…
HeartSaVioR Jul 15, 2020
2527fbc
Revert "[SPARK-32276][SQL] Remove redundant sorts before repartition …
dongjoon-hyun Jul 15, 2020
e449993
[SPARK-31480][SQL] Improve the EXPLAIN FORMATTED's output for DSV2's …
dilipbiswal Jul 15, 2020
8950dcb
[SPARK-32318][SQL][TESTS] Add a test case to EliminateSortsSuite for …
dongjoon-hyun Jul 15, 2020
cf22d94
[SPARK-32036] Replace references to blacklist/whitelist language with…
xkrogen Jul 15, 2020
b05f309
[SPARK-32140][ML][PYSPARK] Add training summary to FMClassificationModel
huaxingao Jul 15, 2020
c28a6fa
[SPARK-29292][SQL][ML] Update rest of default modules (Hive, ML, etc)…
srowen Jul 15, 2020
db47c6e
[SPARK-32125][UI] Support get taskList by status in Web UI and SHS Re…
Jul 16, 2020
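Several commits in this range change user-facing APIs. As one illustration, SPARK-29358 lets unionByName fill columns missing on one side with nulls. The sketch below is a hypothetical smoke test against a fresh build; the allowMissingColumns parameter name comes from that change, and driving the pyspark shell non-interactively through a heredoc is an assumption, not part of this PR.

# Hypothetical smoke test for SPARK-29358 (unionByName filling missing columns with null).
# Assumes a built Spark checkout; bin/pyspark reads the snippet from stdin and exits.
./bin/pyspark <<'EOF'
df1 = spark.createDataFrame([(1, 2)], ["a", "b"])
df2 = spark.createDataFrame([(3, 4)], ["a", "c"])
# Column "c" (for df1 rows) and column "b" (for df2 rows) are filled with null.
df1.unionByName(df2, allowMissingColumns=True).show()
EOF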
289 changes: 186 additions & 103 deletions .github/workflows/master.yml
@@ -9,148 +9,231 @@ on:
- master

jobs:
# TODO(SPARK-32248): Recover JDK 11 builds
# Build: build Spark and run the tests for specified modules.
build:

name: "Build modules: ${{ matrix.modules }} ${{ matrix.comment }} (JDK ${{ matrix.java }}, ${{ matrix.hadoop }}, ${{ matrix.hive }})"
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
java: [ '1.8', '11' ]
hadoop: [ 'hadoop-2.7', 'hadoop-3.2' ]
hive: [ 'hive-1.2', 'hive-2.3' ]
exclude:
- java: '11'
hive: 'hive-1.2'
- hadoop: 'hadoop-3.2'
hive: 'hive-1.2'
name: Build Spark - JDK${{ matrix.java }}/${{ matrix.hadoop }}/${{ matrix.hive }}

java:
- 1.8
hadoop:
- hadoop3.2
hive:
- hive2.3
# TODO(SPARK-32246): We don't test 'streaming-kinesis-asl' for now.
# Kinesis tests depends on external Amazon kinesis service.
# Note that the modules below are from sparktestsupport/modules.py.
modules:
- |-
core, unsafe, kvstore, avro,
network-common, network-shuffle, repl, launcher,
examples, sketch, graphx
- |-
catalyst, hive-thriftserver
- |-
streaming, sql-kafka-0-10, streaming-kafka-0-10,
mllib-local, mllib,
yarn, mesos, kubernetes, hadoop-cloud, spark-ganglia-lgpl
- |-
pyspark-sql, pyspark-mllib, pyspark-resource
- |-
pyspark-core, pyspark-streaming, pyspark-ml
- |-
sparkr
# Here, we split Hive and SQL tests into some of slow ones and the rest of them.
included-tags: [""]
excluded-tags: [""]
comment: [""]
include:
# Hive tests
- modules: hive
java: 1.8
hadoop: hadoop3.2
hive: hive2.3
included-tags: org.apache.spark.tags.SlowHiveTest
comment: "- slow tests"
- modules: hive
java: 1.8
hadoop: hadoop3.2
hive: hive2.3
excluded-tags: org.apache.spark.tags.SlowHiveTest
comment: "- other tests"
# SQL tests
- modules: sql
java: 1.8
hadoop: hadoop3.2
hive: hive2.3
included-tags: org.apache.spark.tags.ExtendedSQLTest
comment: "- slow tests"
- modules: sql
java: 1.8
hadoop: hadoop3.2
hive: hive2.3
excluded-tags: org.apache.spark.tags.ExtendedSQLTest
comment: "- other tests"
env:
MODULES_TO_TEST: ${{ matrix.modules }}
EXCLUDED_TAGS: ${{ matrix.excluded-tags }}
INCLUDED_TAGS: ${{ matrix.included-tags }}
HADOOP_PROFILE: ${{ matrix.hadoop }}
HIVE_PROFILE: ${{ matrix.hive }}
# GitHub Actions' default miniconda to use in pip packaging test.
CONDA_PREFIX: /usr/share/miniconda
GITHUB_PREV_SHA: ${{ github.event.before }}
steps:
- uses: actions/checkout@master
# We split caches because GitHub Action Cache has a 400MB-size limit.
- uses: actions/cache@v1
- name: Checkout Spark repository
uses: actions/checkout@v2
# In order to fetch changed files
with:
fetch-depth: 0
# Cache local repositories. Note that GitHub Actions cache has a 2G limit.
- name: Cache Scala, SBT, Maven and Zinc
uses: actions/cache@v1
with:
path: build
key: build-${{ hashFiles('**/pom.xml') }}
restore-keys: |
build-
- uses: actions/cache@v1
- name: Cache Maven local repository
uses: actions/cache@v2
with:
path: ~/.m2/repository/com
key: ${{ matrix.java }}-${{ matrix.hadoop }}-maven-com-${{ hashFiles('**/pom.xml') }}
restore-keys: |
${{ matrix.java }}-${{ matrix.hadoop }}-maven-com-
- uses: actions/cache@v1
with:
path: ~/.m2/repository/org
key: ${{ matrix.java }}-${{ matrix.hadoop }}-maven-org-${{ hashFiles('**/pom.xml') }}
restore-keys: |
${{ matrix.java }}-${{ matrix.hadoop }}-maven-org-
- uses: actions/cache@v1
with:
path: ~/.m2/repository/net
key: ${{ matrix.java }}-${{ matrix.hadoop }}-maven-net-${{ hashFiles('**/pom.xml') }}
path: ~/.m2/repository
key: ${{ matrix.java }}-${{ matrix.hadoop }}-maven-${{ hashFiles('**/pom.xml') }}
restore-keys: |
${{ matrix.java }}-${{ matrix.hadoop }}-maven-net-
- uses: actions/cache@v1
${{ matrix.java }}-${{ matrix.hadoop }}-maven-
- name: Cache Ivy local repository
uses: actions/cache@v2
with:
path: ~/.m2/repository/io
key: ${{ matrix.java }}-${{ matrix.hadoop }}-maven-io-${{ hashFiles('**/pom.xml') }}
path: ~/.ivy2/cache
key: ${{ matrix.java }}-${{ matrix.hadoop }}-ivy-${{ hashFiles('**/pom.xml') }}-${{ hashFiles('**/plugins.sbt') }}
restore-keys: |
${{ matrix.java }}-${{ matrix.hadoop }}-maven-io-
- name: Set up JDK ${{ matrix.java }}
${{ matrix.java }}-${{ matrix.hadoop }}-ivy-
- name: Install JDK ${{ matrix.java }}
uses: actions/setup-java@v1
with:
java-version: ${{ matrix.java }}
- name: Build with Maven
run: |
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=1g -Dorg.slf4j.simpleLogger.defaultLogLevel=WARN"
export MAVEN_CLI_OPTS="--no-transfer-progress"
mkdir -p ~/.m2
./build/mvn $MAVEN_CLI_OPTS -DskipTests -Pyarn -Pmesos -Pkubernetes -Phive -P${{ matrix.hive }} -Phive-thriftserver -P${{ matrix.hadoop }} -Phadoop-cloud -Djava.version=${{ matrix.java }} install
rm -rf ~/.m2/repository/org/apache/spark


lint:
runs-on: ubuntu-latest
name: Linters (Java/Scala/Python), licenses, dependencies
steps:
- uses: actions/checkout@master
- uses: actions/setup-java@v1
# PySpark
- name: Install PyPy3
# Note that order of Python installations here matters because default python3 is
# overridden by pypy3.
uses: actions/setup-python@v2
if: contains(matrix.modules, 'pyspark')
with:
java-version: '11'
- uses: actions/setup-python@v1
python-version: pypy3
architecture: x64
- name: Install Python 3.6
uses: actions/setup-python@v2
if: contains(matrix.modules, 'pyspark')
with:
python-version: '3.x'
architecture: 'x64'
- name: Scala
run: ./dev/lint-scala
- name: Java
run: ./dev/lint-java
- name: Python
run: |
pip install flake8 sphinx numpy
./dev/lint-python
- name: License
run: ./dev/check-license
- name: Dependencies
run: ./dev/test-dependencies.sh

lintr:
runs-on: ubuntu-latest
name: Linter (R)
steps:
- uses: actions/checkout@master
- uses: actions/setup-java@v1
python-version: 3.6
architecture: x64
- name: Install Python 3.8
uses: actions/setup-python@v2
# We should install one Python that is higher then 3+ for SQL and Yarn because:
# - SQL component also has Python related tests, for example, IntegratedUDFTestUtils.
# - Yarn has a Python specific test too, for example, YarnClusterSuite.
if: contains(matrix.modules, 'yarn') || contains(matrix.modules, 'pyspark') || (contains(matrix.modules, 'sql') && !contains(matrix.modules, 'sql-'))
with:
java-version: '11'
- uses: r-lib/actions/setup-r@v1
python-version: 3.8
architecture: x64
- name: Install Python packages (Python 3.6 and PyPy3)
if: contains(matrix.modules, 'pyspark')
# PyArrow is not supported in PyPy yet, see ARROW-2651.
# TODO(SPARK-32247): scipy installation with PyPy fails for an unknown reason.
run: |
python3.6 -m pip install numpy pyarrow pandas scipy
python3.6 -m pip list
pypy3 -m pip install numpy pandas
pypy3 -m pip list
- name: Install Python packages (Python 3.8)
if: contains(matrix.modules, 'pyspark') || (contains(matrix.modules, 'sql') && !contains(matrix.modules, 'sql-'))
run: |
python3.8 -m pip install numpy pyarrow pandas scipy
python3.8 -m pip list
# SparkR
- name: Install R 3.6
uses: r-lib/actions/setup-r@v1
if: contains(matrix.modules, 'sparkr')
with:
r-version: '3.6.2'
- name: Install lib
r-version: 3.6
- name: Install R packages
if: contains(matrix.modules, 'sparkr')
run: |
sudo apt-get install -y libcurl4-openssl-dev
- name: install R packages
sudo Rscript -e "install.packages(c('knitr', 'rmarkdown', 'testthat', 'devtools', 'e1071', 'survival', 'arrow', 'roxygen2'), repos='https://cloud.r-project.org/')"
# Show installed packages in R.
sudo Rscript -e 'pkg_list <- as.data.frame(installed.packages()[, c(1,3:4)]); pkg_list[is.na(pkg_list$Priority), 1:2, drop = FALSE]'
# Run the tests.
- name: "Run tests: ${{ matrix.modules }}"
run: |
sudo Rscript -e "install.packages(c('curl', 'xml2', 'httr', 'devtools', 'testthat', 'knitr', 'rmarkdown', 'roxygen2', 'e1071', 'survival'), repos='https://cloud.r-project.org/')"
sudo Rscript -e "devtools::install_github('jimhester/[email protected]')"
- name: package and install SparkR
run: ./R/install-dev.sh
- name: lint-r
run: ./dev/lint-r
# Hive tests become flaky when running in parallel as it's too intensive.
if [[ "$MODULES_TO_TEST" == "hive" ]]; then export SERIAL_SBT_TESTS=1; fi
mkdir -p ~/.m2
./dev/run-tests --parallelism 2 --modules "$MODULES_TO_TEST" --included-tags "$INCLUDED_TAGS" --excluded-tags "$EXCLUDED_TAGS"
rm -rf ~/.m2/repository/org/apache/spark

docs:
# Static analysis, and documentation build
lint:
name: Linters, licenses, dependencies and documentation generation
runs-on: ubuntu-latest
name: Generate documents
steps:
- uses: actions/checkout@master
- uses: actions/cache@v1
- name: Checkout Spark repository
uses: actions/checkout@v2
- name: Cache Maven local repository
uses: actions/cache@v2
with:
path: ~/.m2/repository
key: docs-maven-repo-${{ hashFiles('**/pom.xml') }}
restore-keys: |
docs-maven-repo-
- uses: actions/setup-java@v1
docs-maven-
- name: Install JDK 1.8
uses: actions/setup-java@v1
with:
java-version: '1.8'
- uses: actions/setup-python@v1
java-version: 1.8
- name: Install Python 3.6
uses: actions/setup-python@v2
with:
python-version: '3.x'
architecture: 'x64'
- uses: actions/setup-ruby@v1
python-version: 3.6
architecture: x64
- name: Install Python linter dependencies
run: |
pip3 install flake8 sphinx numpy
- name: Install R 3.6
uses: r-lib/actions/setup-r@v1
with:
ruby-version: '2.7'
- uses: r-lib/actions/setup-r@v1
r-version: 3.6
- name: Install R linter dependencies and SparkR
run: |
sudo apt-get install -y libcurl4-openssl-dev
sudo Rscript -e "install.packages(c('devtools'), repos='https://cloud.r-project.org/')"
sudo Rscript -e "devtools::install_github('jimhester/[email protected]')"
./R/install-dev.sh
- name: Install Ruby 2.7 for documentation generation
uses: actions/setup-ruby@v1
with:
r-version: '3.6.2'
- name: Install lib and pandoc
ruby-version: 2.7
- name: Install dependencies for documentation generation
run: |
sudo apt-get install -y libcurl4-openssl-dev pandoc
- name: Install packages
run: |
pip install sphinx mkdocs numpy
gem install jekyll jekyll-redirect-from rouge
sudo Rscript -e "install.packages(c('curl', 'xml2', 'httr', 'devtools', 'testthat', 'knitr', 'rmarkdown', 'roxygen2', 'e1071', 'survival'), repos='https://cloud.r-project.org/')"
- name: Run jekyll build
sudo Rscript -e "install.packages(c('devtools', 'testthat', 'knitr', 'rmarkdown', 'roxygen2'), repos='https://cloud.r-project.org/')"
- name: Scala linter
run: ./dev/lint-scala
- name: Java linter
run: ./dev/lint-java
- name: Python linter
run: ./dev/lint-python
- name: R linter
run: ./dev/lint-r
- name: License test
run: ./dev/check-license
- name: Dependencies test
run: ./dev/test-dependencies.sh
- name: Run documentation build
run: |
cd docs
jekyll build
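The workflow above splits the build into per-module matrix entries and drives each one through ./dev/run-tests, with the Hive and SQL modules further divided into slow and other tests by tag. Below is a minimal sketch of reproducing one matrix cell (the Hive "slow tests" split) locally; the environment variable names and run-tests arguments are taken from the workflow, while exporting them by hand outside GitHub Actions, and having JDK 8 plus the hadoop3.2/hive2.3 profiles available, is assumed.

# Hypothetical local reproduction of the "hive - slow tests" matrix cell.
# Variable names mirror the workflow above.
export MODULES_TO_TEST=hive
export INCLUDED_TAGS=org.apache.spark.tags.SlowHiveTest
export EXCLUDED_TAGS=""
export HADOOP_PROFILE=hadoop3.2
export HIVE_PROFILE=hive2.3
# Hive tests are run serially because they are too intensive in parallel.
if [[ "$MODULES_TO_TEST" == "hive" ]]; then export SERIAL_SBT_TESTS=1; fi
./dev/run-tests --parallelism 2 \
  --modules "$MODULES_TO_TEST" \
  --included-tags "$INCLUDED_TAGS" \
  --excluded-tags "$EXCLUDED_TAGS"

The combined lint/docs job can be approximated the same way by running the ./dev/lint-* scripts, ./dev/check-license, ./dev/test-dependencies.sh, and jekyll build in docs/, as listed in its steps.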
2 changes: 1 addition & 1 deletion R/pkg/tests/fulltests/test_context.R
@@ -139,7 +139,7 @@ test_that("utility function can be called", {
expect_true(TRUE)
})

test_that("getClientModeSparkSubmitOpts() returns spark-submit args from whitelist", {
test_that("getClientModeSparkSubmitOpts() returns spark-submit args from allowList", {
e <- new.env()
e[["spark.driver.memory"]] <- "512m"
ops <- getClientModeSparkSubmitOpts("sparkrmain", e)
8 changes: 4 additions & 4 deletions R/pkg/tests/fulltests/test_sparkSQL.R
@@ -3921,14 +3921,14 @@ test_that("No extra files are created in SPARK_HOME by starting session and maki
# before creating a SparkSession with enableHiveSupport = T at the top of this test file
# (filesBefore). The test here is to compare that (filesBefore) against the list of files before
# any test is run in run-all.R (sparkRFilesBefore).
# sparkRWhitelistSQLDirs is also defined in run-all.R, and should contain only 2 whitelisted dirs,
# sparkRAllowedSQLDirs is also defined in run-all.R, and should contain only 2 allowed dirs,
# here allow the first value, spark-warehouse, in the diff, everything else should be exactly the
# same as before any test is run.
compare_list(sparkRFilesBefore, setdiff(filesBefore, sparkRWhitelistSQLDirs[[1]]))
compare_list(sparkRFilesBefore, setdiff(filesBefore, sparkRAllowedSQLDirs[[1]]))
# third, ensure only spark-warehouse and metastore_db are created when enableHiveSupport = T
# note: as the note above, after running all tests in this file while enableHiveSupport = T, we
# check the list of files again. This time we allow both whitelisted dirs to be in the diff.
compare_list(sparkRFilesBefore, setdiff(filesAfter, sparkRWhitelistSQLDirs))
# check the list of files again. This time we allow both dirs to be in the diff.
compare_list(sparkRFilesBefore, setdiff(filesAfter, sparkRAllowedSQLDirs))
})

unlink(parquetPath)
4 changes: 2 additions & 2 deletions R/pkg/tests/run-all.R
@@ -35,8 +35,8 @@ if (identical(Sys.getenv("NOT_CRAN"), "true")) {
install.spark(overwrite = TRUE)

sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
invisible(lapply(sparkRWhitelistSQLDirs,
sparkRAllowedSQLDirs <- c("spark-warehouse", "metastore_db")
invisible(lapply(sparkRAllowedSQLDirs,
function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)

@@ -155,4 +155,4 @@ server will be able to understand. This will cause the server to close the conne
attacker tries to send any command to the server. The attacker can just hold the channel open for
some time, which will be closed when the server times out the channel. These issues could be
separately mitigated by adding a shorter timeout for the first message after authentication, and
potentially by adding host blacklists if a possible attack is detected from a particular host.
potentially by adding host reject-lists if a possible attack is detected from a particular host.