
Commit 54c1fae

[BUILD][MINOR] Fix java style check issues
## What changes were proposed in this pull request?

This patch fixes a few recently introduced java style check errors in master and release branch.

As an aside, given that [java linting currently fails](#10763) on machines with a clean maven cache, it'd be great to find another workaround to [re-enable the java style checks](https://github.com/apache/spark/blob/3a07eff5af601511e97a05e6fea0e3d48f74c4f0/dev/run-tests.py#L577) as part of Spark PRB. /cc zsxwing JoshRosen srowen for any suggestions

## How was this patch tested?

Manual check.

Author: Sameer Agarwal <[email protected]>

Closes #20323 from sameeragarwal/java.

(cherry picked from commit 9c4b998)
Signed-off-by: Sameer Agarwal <[email protected]>
1 parent 541dbc0 commit 54c1fae
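The style errors fixed here are all over-long lines. As a minimal, hypothetical sketch (not Spark's actual tooling, which runs checkstyle via dev/lint-java), the check being enforced amounts to flagging Java lines longer than 100 characters; the `check_line_length` helper below is an illustration only:

```shell
#!/bin/sh
# Hypothetical helper, not part of the Spark repo: report lines over
# 100 characters, the limit Spark's checkstyle config enforces for Java.
check_line_length() {
  awk 'length($0) > 100 {
         printf "%s:%d: line too long (%d chars)\n", FILENAME, FNR, length($0)
       }' "$1"
}

# Example: a file with one short line and one 120-character line.
printf '%s\n' "short line" "$(printf 'x%.0s' $(seq 1 120))" > /tmp/Sample.java
check_line_length /tmp/Sample.java
```

Each hunk in this commit simply re-wraps one such line across two or more lines, which is why the diffs below are pure whitespace/formatting changes.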

File tree: 3 files changed (+9, -5 lines)


sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java

Lines changed: 4 additions & 2 deletions

@@ -28,8 +28,10 @@
 /**
  * A data source writer that is returned by
  * {@link WriteSupport#createWriter(String, StructType, SaveMode, DataSourceV2Options)}/
- * {@link org.apache.spark.sql.sources.v2.streaming.MicroBatchWriteSupport#createMicroBatchWriter(String, long, StructType, OutputMode, DataSourceV2Options)}/
- * {@link org.apache.spark.sql.sources.v2.streaming.ContinuousWriteSupport#createContinuousWriter(String, StructType, OutputMode, DataSourceV2Options)}.
+ * {@link org.apache.spark.sql.sources.v2.streaming.MicroBatchWriteSupport#createMicroBatchWriter(
+ * String, long, StructType, OutputMode, DataSourceV2Options)}/
+ * {@link org.apache.spark.sql.sources.v2.streaming.ContinuousWriteSupport#createContinuousWriter(
+ * String, StructType, OutputMode, DataSourceV2Options)}.
  * It can mix in various writing optimization interfaces to speed up the data saving. The actual
  * writing logic is delegated to {@link DataWriter}.
  *

sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java

Lines changed: 3 additions & 2 deletions

@@ -556,8 +556,9 @@ final int getArrayOffset(int rowId) {
 /**
  * Any call to "get" method will throw UnsupportedOperationException.
  *
- * Access struct values in a ArrowColumnVector doesn't use this accessor. Instead, it uses getStruct() method defined
- * in the parent class. Any call to "get" method in this class is a bug in the code.
+ * Access struct values in a ArrowColumnVector doesn't use this accessor. Instead, it uses
+ * getStruct() method defined in the parent class. Any call to "get" method in this class is a
+ * bug in the code.
  *
  */
 private static class StructAccessor extends ArrowVectorAccessor {

sql/core/src/test/java/test/org/apache/spark/sql/sources/v2/JavaBatchDataSourceV2.java

Lines changed: 2 additions & 1 deletion

@@ -69,7 +69,8 @@ public DataReader<ColumnarBatch> createDataReader() {
   ColumnVector[] vectors = new ColumnVector[2];
   vectors[0] = i;
   vectors[1] = j;
-  this.batch = new ColumnarBatch(new StructType().add("i", "int").add("j", "int"), vectors, BATCH_SIZE);
+  this.batch =
+      new ColumnarBatch(new StructType().add("i", "int").add("j", "int"), vectors, BATCH_SIZE);
   return this;
 }
