Merged
53 commits
73b5da3
[SPARK-36556][SQL] Add DSV2 filters
huaxingao Sep 11, 2021
871b2bd
[SPARK-36760][SQL] Add interface SupportsPushDownV2Filters
huaxingao Sep 22, 2021
8f73de6
[SPARK-37020][SQL] DS V2 LIMIT push down
huaxingao Oct 28, 2021
5dbcdc0
[SPARK-37038][SQL] DSV2 Sample Push Down
huaxingao Nov 4, 2021
39b29d7
[SPARK-37286][SQL] Move compileAggregates from JDBCRDD to JdbcDialect
beliefer Dec 3, 2021
560cabd
[SPARK-37286][DOCS][FOLLOWUP] Fix the wrong parameter name for Javadoc
sarutak Dec 3, 2021
a792696
[SPARK-37262][SQL] Don't log empty aggregate and group by in JDBCScan
huaxingao Nov 11, 2021
9eae482
[SPARK-37483][SQL] Support push down top N to JDBC data source V2
beliefer Dec 16, 2021
abf7662
[SPARK-37644][SQL] Support datasource v2 complete aggregate pushdown
beliefer Dec 23, 2021
aad72ad
[SPARK-37627][SQL] Add sorted column in BucketTransform
huaxingao Dec 13, 2021
c6d90e8
[SPARK-37789][SQL] Add a class to represent general aggregate functio…
cloud-fan Jan 4, 2022
576b1fb
[SPARK-37644][SQL][FOLLOWUP] When partition column is same as group b…
beliefer Jan 5, 2022
cc970f1
[SPARK-37527][SQL] Translate more standard aggregate functions for pu…
beliefer Jan 6, 2022
470371c
[SPARK-37734][SQL][TESTS] Upgrade h2 from 1.4.195 to 2.0.204
beliefer Jan 7, 2022
6aeb2a5
[SPARK-37527][SQL] Compile `COVAR_POP`, `COVAR_SAMP` and `CORR` in `H…
beliefer Jan 10, 2022
b5dc371
[SPARK-37839][SQL] DS V2 supports partial aggregate push-down `AVG`
beliefer Jan 20, 2022
fd06d44
[SPARK-36526][SQL] DSV2 Index Support: Add supportsIndex interface
huaxingao Sep 29, 2021
52b36b0
[SPARK-36913][SQL] Implement createIndex and IndexExists in DS V2 JDB…
huaxingao Oct 8, 2021
ce63110
[SPARK-36914][SQL] Implement dropIndex and listIndexes in JDBC (MySQL…
huaxingao Oct 12, 2021
ac8fd9c
[SPARK-37343][SQL] Implement createIndex, IndexExists and dropIndex i…
dchvn Dec 17, 2021
3cac7e6
[SPARK-37867][SQL] Compile aggregate functions of build-in JDBC dialect
beliefer Jan 25, 2022
b5111bc
[SPARK-37929][SQL][FOLLOWUP] Support cascade mode for JDBC V2
beliefer Jan 26, 2022
229db0e
[SPARK-38035][SQL] Add docker tests for build-in JDBC dialect
beliefer Jan 28, 2022
2227b13
[SPARK-38054][SQL] Supports list namespaces in JDBC v2 MySQL dialect
beliefer Feb 10, 2022
b0e5d0e
[SPARK-36351][SQL] Refactor filter push down in file source v2
huaxingao Sep 3, 2021
ff1a457
[SPARK-36645][SQL] Aggregate (Min/Max/Count) push down for Parquet
huaxingao Oct 11, 2021
762af83
[SPARK-34960][SQL] Aggregate push down for ORC
c21 Oct 29, 2021
4c2380b
[SPARK-37960][SQL] A new framework to represent catalyst expressions …
beliefer Feb 10, 2022
f9b54fb
[SPARK-37867][SQL][FOLLOWUP] Compile aggregate functions for build-in…
beliefer Feb 18, 2022
16cb319
[SPARK-36568][SQL] Better FileScan statistics estimation
peter-toth Aug 26, 2021
659f15f
[SPARK-37929][SQL] Support cascade mode for `dropNamespace` API
dchvn Jan 21, 2022
7e5c9ba
code format
chenzhx Feb 22, 2022
04fef08
[SPARK-38196][SQL] Refactor framework so as JDBC dialect could compil…
beliefer Mar 4, 2022
ed0d635
[SPARK-38361][SQL] Add factory method `getConnection` into `JDBCDialect`
beliefer Mar 8, 2022
ea79285
code format
chenzhx Mar 9, 2022
e16fe8a
[SPARK-38560][SQL] If `Sum`, `Count`, `Any` accompany with distinct, …
beliefer Mar 17, 2022
0fce03d
[SPARK-36718][SQL] Only collapse projects if we don't duplicate expen…
cloud-fan Sep 17, 2021
f113950
[SPARK-38432][SQL] Refactor framework so as JDBC dialect could compil…
beliefer Mar 22, 2022
776a2b5
[SPARK-38432][SQL][FOLLOWUP] Supplement test case for overflow and ad…
beliefer Mar 23, 2022
61a6d34
[SPARK-38533][SQL] DS V2 aggregate push-down supports project with alias
beliefer Mar 23, 2022
219eb4f
code foramt
chenzhx Mar 23, 2022
cab0266
[SPARK-37483][SQL][FOLLOWUP] Rename `pushedTopN` to `PushedTopN` and …
beliefer Mar 23, 2022
9f3194c
[SPARK-38644][SQL] DS V2 topN push-down supports project with alias
beliefer Mar 25, 2022
dbb8c2d
[SPARK-38391][SQL] Datasource v2 supports partial topN push-down
beliefer Mar 28, 2022
b67333d
[SPARK-38633][SQL] Support push down Cast to JDBC data source V2
beliefer Mar 29, 2022
e6cfc55
[SPARK-38432][SQL][FOLLOWUP] Add test case for push down filter with …
beliefer Mar 28, 2022
614cb93
[SPARK-38633][SQL][FOLLOWUP] JDBCSQLBuilder should build cast to type…
beliefer Mar 29, 2022
3034070
[SPARK-37839][SQL][FOLLOWUP] Check overflow when DS V2 partial aggreg…
beliefer Mar 31, 2022
93690a0
[SPARK-37960][SQL][FOLLOWUP] Make the testing CASE WHEN query more re…
beliefer Apr 1, 2022
a730da9
[SPARK-38761][SQL] DS V2 supports push down misc non-aggregate functions
beliefer Apr 11, 2022
9560785
[SPARK-38865][SQL][DOCS] Update document of JDBC options for `pushDow…
beliefer Apr 13, 2022
8c5860d
[SPARK-38855][SQL] DS V2 supports push down math functions
beliefer Apr 13, 2022
2461011
update spark version to r61
chenzhx Apr 14, 2022
2 changes: 1 addition & 1 deletion assembly/pom.xml
Original file line number Diff line number Diff line change
@@ -21,7 +21,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion common/kvstore/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion common/network-common/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion common/sketch/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion common/tags/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion common/unsafe/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion core/pom.xml
@@ -21,7 +21,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../pom.xml</relativePath>
</parent>

28 changes: 23 additions & 5 deletions docs/sql-data-sources-jdbc.md
@@ -9,9 +9,9 @@ license: |
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -191,7 +191,7 @@ logging into the data sources.
<td>write</td>
</td>
</tr>

<tr>
<td><code>cascadeTruncate</code></td>
<td>the default cascading truncate behaviour of the JDBC database in question, specified in the <code>isCascadeTruncate</code> in each JDBCDialect</td>
@@ -241,7 +241,25 @@ logging into the data sources.
<td><code>pushDownAggregate</code></td>
<td><code>false</code></td>
<td>
The option to enable or disable aggregate push-down into the JDBC data source. The default value is false, in which case Spark will not push down aggregates to the JDBC data source. Otherwise, if set to true, aggregates will be pushed down to the JDBC data source. Aggregate push-down is usually turned off when the aggregate is performed faster by Spark than by the JDBC data source. Please note that aggregates can be pushed down if and only if all the aggregate functions and the related filters can be pushed down. Spark assumes that the data source cannot fully complete the aggregate and does a final aggregate over the data source output.
The option to enable or disable aggregate push-down in the V2 JDBC data source. The default value is false, in which case Spark will not push down aggregates to the JDBC data source. Otherwise, if set to true, aggregates will be pushed down to the JDBC data source. Aggregate push-down is usually turned off when the aggregate is performed faster by Spark than by the JDBC data source. Please note that aggregates can be pushed down if and only if all the aggregate functions and the related filters can be pushed down. If <code>numPartitions</code> equals 1 or the group-by key is the same as <code>partitionColumn</code>, Spark pushes the aggregate down to the data source completely and does not apply a final aggregate over the data source output. Otherwise, Spark applies a final aggregate over the data source output.
</td>
<td>read</td>
</tr>

<tr>
<td><code>pushDownLimit</code></td>
<td><code>false</code></td>
<td>
The option to enable or disable LIMIT push-down into the V2 JDBC data source. LIMIT push-down also covers LIMIT + SORT, a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if set to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source. If <code>numPartitions</code> is greater than 1, Spark still applies LIMIT or LIMIT with SORT on the result from the data source even if LIMIT or LIMIT with SORT is pushed down. Otherwise, if LIMIT or LIMIT with SORT is pushed down and <code>numPartitions</code> equals 1, Spark will not apply LIMIT or LIMIT with SORT on the result from the data source.
</td>
<td>read</td>
</tr>

<tr>
<td><code>pushDownTableSample</code></td>
<td><code>false</code></td>
<td>
The option to enable or disable TABLESAMPLE push-down into the V2 JDBC data source. The default value is false, in which case Spark does not push down TABLESAMPLE to the JDBC data source. Otherwise, if set to true, TABLESAMPLE is pushed down to the JDBC data source.
</td>
<td>read</td>
</tr>
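
The three read options documented above can be combined on a single JDBC read. A minimal sketch of such a read (the URL and table name below are placeholders, not values from this change; each option only takes effect on the V2 JDBC code path):

```scala
// Hypothetical read enabling the push-down options described above.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/testdb") // placeholder URL
  .option("dbtable", "employee")                            // placeholder table
  .option("pushDownAggregate", "true")
  .option("pushDownLimit", "true")
  .option("pushDownTableSample", "true")
  .load()
```

As noted in the option descriptions, with `numPartitions` greater than 1 Spark still applies a final LIMIT or aggregate over the data source output even when the operator is pushed down.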
@@ -288,7 +306,7 @@ logging into the data sources.

Note that kerberos authentication with keytab is not always supported by the JDBC driver.<br>
Before using <code>keytab</code> and <code>principal</code> configuration options, please make sure the following requirements are met:
* The included JDBC driver version supports kerberos authentication with keytab.
* The included JDBC driver version supports kerberos authentication with keytab.
* There is a built-in connection provider which supports the used database.

There are built-in connection providers for the following databases:
2 changes: 1 addition & 1 deletion examples/pom.xml
@@ -21,7 +21,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../pom.xml</relativePath>
</parent>

2 changes: 1 addition & 1 deletion external/avro/pom.xml
@@ -21,7 +21,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

Original file line number Diff line number Diff line change
@@ -62,10 +62,6 @@ case class AvroScan(
pushedFilters)
}

override def withFilters(
partitionFilters: Seq[Expression], dataFilters: Seq[Expression]): FileScan =
this.copy(partitionFilters = partitionFilters, dataFilters = dataFilters)

override def equals(obj: Any): Boolean = obj match {
case a: AvroScan => super.equals(a) && dataSchema == a.dataSchema && options == a.options &&
equivalentFilters(pushedFilters, a.pushedFilters)
Original file line number Diff line number Diff line change
@@ -18,7 +18,7 @@ package org.apache.spark.sql.v2.avro

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.StructFilters
import org.apache.spark.sql.connector.read.{Scan, SupportsPushDownFilters}
import org.apache.spark.sql.connector.read.Scan
import org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex
import org.apache.spark.sql.execution.datasources.v2.FileScanBuilder
import org.apache.spark.sql.sources.Filter
@@ -31,7 +31,7 @@ class AvroScanBuilder (
schema: StructType,
dataSchema: StructType,
options: CaseInsensitiveStringMap)
extends FileScanBuilder(sparkSession, fileIndex, dataSchema) with SupportsPushDownFilters {
extends FileScanBuilder(sparkSession, fileIndex, dataSchema) {

override def build(): Scan = {
AvroScan(
@@ -41,17 +41,16 @@ class AvroScanBuilder (
readDataSchema(),
readPartitionSchema(),
options,
pushedFilters())
pushedDataFilters,
partitionFilters,
dataFilters)
}

private var _pushedFilters: Array[Filter] = Array.empty

override def pushFilters(filters: Array[Filter]): Array[Filter] = {
override def pushDataFilters(dataFilters: Array[Filter]): Array[Filter] = {
if (sparkSession.sessionState.conf.avroFilterPushDown) {
_pushedFilters = StructFilters.pushedFilters(filters, dataSchema)
StructFilters.pushedFilters(dataFilters, dataSchema)
} else {
Array.empty[Filter]
}
filters
}

override def pushedFilters(): Array[Filter] = _pushedFilters
}
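
The diff above drops `SupportsPushDownFilters` from `AvroScanBuilder`: instead of mutating a private `_pushedFilters` field, the builder now implements a `pushDataFilters` hook and the base `FileScanBuilder` keeps the pushed filters. A simplified, self-contained sketch of that contract (these are stand-in classes for illustration, not the actual Spark types):

```scala
// Stand-in for Spark's Filter type.
case class Filter(name: String)

// Sketch of the refactored base class: it owns the pushed-filter state and
// asks subclasses only which data filters they can support.
abstract class FileScanBuilderSketch {
  protected var pushedDataFilters: Array[Filter] = Array.empty

  // Called by the base class after it has split partition filters from
  // data filters; the return value is stored as pushedDataFilters.
  def pushDataFilters(dataFilters: Array[Filter]): Array[Filter]

  def push(dataFilters: Array[Filter]): Unit = {
    pushedDataFilters = pushDataFilters(dataFilters)
  }
}

// Avro-like subclass: supports push-down only when the session flag is on.
class AvroLikeScanBuilder(filterPushDownEnabled: Boolean) extends FileScanBuilderSketch {
  override def pushDataFilters(dataFilters: Array[Filter]): Array[Filter] =
    if (filterPushDownEnabled) dataFilters else Array.empty
}
```

The design choice visible in the diff is the same as in this sketch: subclasses return the supported subset instead of each maintaining its own `_pushedFilters` state.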
7 changes: 6 additions & 1 deletion external/docker-integration-tests/pom.xml
@@ -22,7 +22,7 @@
<parent>
<groupId>org.apache.spark</groupId>
<artifactId>spark-parent_2.12</artifactId>
<version>3.2.0-kylin-4.x-r60</version>
<version>3.2.0-kylin-4.x-r61</version>
<relativePath>../../pom.xml</relativePath>
</parent>

@@ -162,5 +162,10 @@
<artifactId>mssql-jdbc</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>
Original file line number Diff line number Diff line change
@@ -18,13 +18,14 @@
package org.apache.spark.sql.jdbc.v2

import java.sql.Connection
import java.util.Locale

import org.scalatest.time.SpanSugar._

import org.apache.spark.SparkConf
import org.apache.spark.sql.AnalysisException
import org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog
import org.apache.spark.sql.jdbc.{DatabaseOnDocker, DockerJDBCIntegrationSuite}
import org.apache.spark.sql.jdbc.DatabaseOnDocker
import org.apache.spark.sql.types._
import org.apache.spark.tags.DockerTest

@@ -36,8 +37,9 @@ import org.apache.spark.tags.DockerTest
* }}}
*/
@DockerTest
class DB2IntegrationSuite extends DockerJDBCIntegrationSuite with V2JDBCTest {
class DB2IntegrationSuite extends DockerJDBCIntegrationV2Suite with V2JDBCTest {
override val catalogName: String = "db2"
override val namespaceOpt: Option[String] = Some("DB2INST1")
override val db = new DatabaseOnDocker {
override val imageName = sys.env.getOrElse("DB2_DOCKER_IMAGE_NAME", "ibmcom/db2:11.5.4.0")
override val env = Map(
@@ -59,8 +61,13 @@ class DB2IntegrationSuite extends DockerJDBCIntegrationSuite with V2JDBCTest {
override def sparkConf: SparkConf = super.sparkConf
.set("spark.sql.catalog.db2", classOf[JDBCTableCatalog].getName)
.set("spark.sql.catalog.db2.url", db.getJdbcUrl(dockerIp, externalPort))
.set("spark.sql.catalog.db2.pushDownAggregate", "true")

override def dataPreparation(conn: Connection): Unit = {}
override def tablePreparation(connection: Connection): Unit = {
connection.prepareStatement(
"CREATE TABLE employee (dept INTEGER, name VARCHAR(10), salary DECIMAL(20, 2), bonus DOUBLE)")
.executeUpdate()
}

override def testUpdateColumnType(tbl: String): Unit = {
sql(s"CREATE TABLE $tbl (ID INTEGER)")
@@ -86,4 +93,17 @@ class DB2IntegrationSuite extends DockerJDBCIntegrationSuite with V2JDBCTest {
val expectedSchema = new StructType().add("ID", IntegerType, true, defaultMetadata)
assert(t.schema === expectedSchema)
}

override def caseConvert(tableName: String): String = tableName.toUpperCase(Locale.ROOT)

testVarPop()
testVarPop(true)
testVarSamp()
testVarSamp(true)
testStddevPop()
testStddevPop(true)
testStddevSamp()
testStddevSamp(true)
testCovarPop()
testCovarSamp()
}
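
The `testVarPop()`, `testStddevSamp()`, and similar calls added above come from the shared `V2JDBCTest` trait and exercise aggregate push-down against the `employee` table created in `tablePreparation`. A sketch of the kind of query such a test covers (assumes a running suite with the `db2` catalog configured via `sparkConf` as shown above; the exact plan output is version-dependent):

```scala
// With spark.sql.catalog.db2.pushDownAggregate=true, VAR_POP may be compiled
// into the DB2-side query by the JDBC dialect instead of computed by Spark.
val df = spark.sql(
  "SELECT dept, VAR_POP(salary) FROM db2.employee GROUP BY dept")
df.explain()  // a pushed aggregate shows up in the scan node's pushed-down info
```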
Original file line number Diff line number Diff line change
@@ -0,0 +1,74 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.sql.jdbc.v2

import java.sql.Connection

import scala.collection.JavaConverters._

import org.apache.spark.sql.jdbc.{DatabaseOnDocker, DockerJDBCIntegrationSuite}
import org.apache.spark.sql.util.CaseInsensitiveStringMap
import org.apache.spark.tags.DockerTest

/**
* To run this test suite for a specific version (e.g., ibmcom/db2:11.5.6.0a):
* {{{
* ENABLE_DOCKER_INTEGRATION_TESTS=1 DB2_DOCKER_IMAGE_NAME=ibmcom/db2:11.5.6.0a
* ./build/sbt -Pdocker-integration-tests "testOnly *v2.DB2NamespaceSuite"
* }}}
*/
@DockerTest
class DB2NamespaceSuite extends DockerJDBCIntegrationSuite with V2JDBCNamespaceTest {
override val db = new DatabaseOnDocker {
override val imageName = sys.env.getOrElse("DB2_DOCKER_IMAGE_NAME", "ibmcom/db2:11.5.6.0a")
override val env = Map(
"DB2INST1_PASSWORD" -> "rootpass",
"LICENSE" -> "accept",
"DBNAME" -> "db2foo",
"ARCHIVE_LOGS" -> "false",
"AUTOCONFIG" -> "false"
)
override val usesIpc = false
override val jdbcPort: Int = 50000
override val privileged = true
override def getJdbcUrl(ip: String, port: Int): String =
s"jdbc:db2://$ip:$port/db2foo:user=db2inst1;password=rootpass;retrieveMessagesFromServerOnGetMessage=true;" //scalastyle:ignore
}

val map = new CaseInsensitiveStringMap(
Map("url" -> db.getJdbcUrl(dockerIp, externalPort),
"driver" -> "com.ibm.db2.jcc.DB2Driver").asJava)

catalog.initialize("db2", map)

override def dataPreparation(conn: Connection): Unit = {}

override def builtinNamespaces: Array[Array[String]] =
Array(Array("NULLID"), Array("SQLJ"), Array("SYSCAT"), Array("SYSFUN"),
Array("SYSIBM"), Array("SYSIBMADM"), Array("SYSIBMINTERNAL"), Array("SYSIBMTS"),
Array("SYSPROC"), Array("SYSPUBLIC"), Array("SYSSTAT"), Array("SYSTOOLS"))

override def listNamespaces(namespace: Array[String]): Array[Array[String]] = {
builtinNamespaces ++ Array(namespace)
}

override val supportsDropSchemaCascade: Boolean = false

testListNamespaces()
testDropNamespaces()
}