
Conversation

@stoty
Contributor

@stoty stoty commented May 6, 2025

No description provided.

@stoty stoty requested review from NihalJain, meszibalu and ndimiduk May 6, 2025 14:28
@Apache-HBase

🎊 +1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 1m 15s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-0 ⚠️ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ master Compile Tests _
+0 🆗 mvndep 0m 26s Maven dependency ordering for branch
+1 💚 mvninstall 1m 29s master passed
+1 💚 compile 0m 49s master passed
+1 💚 spotless 0m 14s branch has no errors when running spotless:check.
+1 💚 javadoc 0m 50s master passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 7s Maven dependency ordering for patch
+1 💚 mvninstall 0m 48s the patch passed
+1 💚 compile 0m 51s the patch passed
+1 💚 javac 0m 51s the patch passed
+1 💚 whitespace 0m 0s The patch has no whitespace issues.
+1 💚 xml 0m 2s The patch has no ill-formed XML file.
+1 💚 spotless 0m 11s patch has no errors when running spotless:check.
+1 💚 javadoc 0m 49s the patch passed
_ Other Tests _
+1 💚 unit 8m 15s root in the patch passed.
Total 17m 1s
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-Connectors-PreCommit/job/PR-143/1/artifact/yetus-precommit-check/output/Dockerfile
GITHUB PR #143
Optional Tests dupname javac javadoc unit spotless xml compile
uname Linux 773bcd959999 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 GNU/Linux
Build tool hb_maven
Personality dev-support/jenkins/hbase-personality.sh
git revision master / af4230c
Default Java Oracle Corporation-1.8.0_282-b08
Test Results https://ci-hbase.apache.org/job/HBase-Connectors-PreCommit/job/PR-143/1/testReport/
Max. process+thread count 945 (vs. ulimit of 12500)
modules C: spark spark/hbase-spark-protocol . U: .
Console output https://ci-hbase.apache.org/job/HBase-Connectors-PreCommit/job/PR-143/1/console
versions git=2.20.1
Powered by Apache Yetus 0.12.0 https://yetus.apache.org

This message was automatically generated.

@NihalJain
Contributor

Hi @stoty, what is the error you are seeing? I remember building hbase-connectors with hbase 2.6.0 and also fixed 8c9de32 for that. Maybe it's something we changed after the 2.6.0 release?

@stoty
Contributor Author

stoty commented May 7, 2025

> Hi @stoty, what is the error you are seeing? I remember building hbase-connectors with hbase 2.6.0 and also fixed 8c9de32 for that. Maybe it's something we changed after the 2.6.0 release?

Driver stacktrace:
25/05/07 10:23:35 INFO DAGScheduler: Job 17 failed: show at PartitionFilterSuite.scala:480, took 0.030877 s

  • or *** FAILED ***
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 18.0 failed 1 times, most recent failure: Lost task 0.0 in stage 18.0 (TID 27) (172.30.65.179 executor driver): java.lang.NoSuchMethodError: 'void org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.makeExtensionsImmutable()'
    at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.&lt;init&gt;(SparkFilterProtos.java:894)
    at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.&lt;init&gt;(SparkFilterProtos.java:805)
    at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter$1.parsePartialFrom(SparkFilterProtos.java:915)
    at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter$1.parsePartialFrom(SparkFilterProtos.java:910)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:135)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:168)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:180)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:185)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:25)
    at org.apache.hadoop.hbase.spark.protobuf.generated.SparkFilterProtos$SQLPredicatePushDownFilter.parseFrom(SparkFilterProtos.java:1224)
    at org.apache.hadoop.hbase.spark.SparkSQLPushDownFilter.parseFrom(SparkSQLPushDownFilter.java:172)
    at org.apache.hadoop.hbase.spark.datasources.SerializedFilter$.$anonfun$fromSerializedFilter$1(HBaseTableScanRDD.scala:309)
    at scala.Option.map(Option.scala:230)
    at org.apache.hadoop.hbase.spark.datasources.SerializedFilter$.fromSerializedFilter(HBaseTableScanRDD.scala:309)
    at org.apache.hadoop.hbase.spark.datasources.HBaseTableScanRDD.compute(HBaseTableScanRDD.scala:237)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:136)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
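The `NoSuchMethodError` above is the classic symptom of generated protobuf classes being out of step with the shaded protobuf runtime on the classpath: `makeExtensionsImmutable()` existed on older `GeneratedMessageV3` versions of protobuf-java but was removed in later releases, so code generated by an older protoc fails at runtime against a newer runtime jar. A minimal diagnostic sketch (class and method names are taken from the stack trace; this probe is purely illustrative and not part of the patch):

```java
// Hedged sketch: probe whether the shaded protobuf runtime on the classpath
// still provides makeExtensionsImmutable(), which the old generated
// SparkFilterProtos classes call. Three outcomes are distinguished:
// runtime missing entirely, runtime present with the method, or runtime
// present but the method removed (the mismatch seen in the stack trace).
public class ProtobufRuntimeProbe {
  public static void main(String[] args) {
    String base = "org.apache.hbase.thirdparty.com.google.protobuf.GeneratedMessageV3";
    try {
      Class<?> c = Class.forName(base);
      c.getDeclaredMethod("makeExtensionsImmutable");
      System.out.println("makeExtensionsImmutable present");
    } catch (ClassNotFoundException e) {
      System.out.println("shaded protobuf runtime not on classpath");
    } catch (NoSuchMethodException e) {
      System.out.println("runtime present but makeExtensionsImmutable missing");
    }
  }
}
```

Run against the hbase-thirdparty jar actually used at test time: the "missing" branch corresponds to the failure above and means the generated classes need regenerating (or the protobuf versions need realigning). On a JVM without the shaded jar at all, it simply reports that the runtime is not on the classpath.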

@stoty
Contributor Author

stoty commented May 7, 2025

You can use #144 to repro the scalatest errors, @NihalJain .

@NihalJain
Contributor

NihalJain commented May 7, 2025

> You can use #144 to repro the scalatest errors, @NihalJain .

Thanks Istvan, let me try that.

@stoty stoty merged commit b850f82 into apache:master May 7, 2025
1 check passed
@stoty
Contributor Author

stoty commented May 7, 2025

Make sure to apply that without this patch (or use the correct protobuf version) when verifying that this change fixes the error.
