spark-4.0/4.0.0-r0: cve remediation #57504 (Closed)

Commit 3a5abc2: spark-4.0/4.0.0-r0: fix GHSA-c476-j253-5rgq
Chainguard Internal / elastic-build (eco-2-28) succeeded Jun 25, 2025 in 36m 59s

APKs built successfully

Build ID: 1a8acc0c-4edb-47bf-9247-41a0dea0cad1

Details

x86_64 Logs

O TaskSetManager: Finished task 0.0 in stage 3.0 (TID 2) in 59 ms on e0f75306bb20 (executor driver) (1/1)
25/06/25 09:45:52 INFO TaskSchedulerImpl: Removed TaskSet 3.0 whose tasks have all completed, from pool 
25/06/25 09:45:52 INFO DAGScheduler: ResultStage 3 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) finished in 67 ms
25/06/25 09:45:52 INFO DAGScheduler: Job 2 is finished. Cancelling potential speculative or zombie tasks for this job
25/06/25 09:45:52 INFO TaskSchedulerImpl: Canceling stage 3
25/06/25 09:45:52 INFO TaskSchedulerImpl: Killing all running tasks in stage 3: Stage finished
25/06/25 09:45:52 INFO DAGScheduler: Job 2 finished: $anonfun$withThreadLocalCaptured$2 at <unknown>:0, took 75.148562 ms
25/06/25 09:45:52 INFO CodeGenerator: Code generated in 5.626935 ms
+---+--------------+
|age|average_salary|
+---+--------------+
| 40|       80000.0|
| 35|       67500.0|
| 30|       60000.0|
+---+--------------+

25/06/25 09:45:52 INFO FileSourceStrategy: Pushed Filters: 
25/06/25 09:45:52 INFO FileSourceStrategy: Post-Scan Filters: Set()
25/06/25 09:45:52 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 214.0 KiB, free 433.3 MiB)
25/06/25 09:45:52 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 38.5 KiB, free 433.3 MiB)
25/06/25 09:45:52 INFO SparkContext: Created broadcast 6 from $anonfun$withThreadLocalCaptured$2 at <unknown>:0
25/06/25 09:45:52 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
25/06/25 09:45:52 INFO DAGScheduler: Registering RDD 20 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) as input to shuffle 1
25/06/25 09:45:52 INFO DAGScheduler: Got map stage job 3 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) with 1 output partitions
25/06/25 09:45:52 INFO DAGScheduler: Final stage: ShuffleMapStage 4 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0)
25/06/25 09:45:52 INFO DAGScheduler: Parents of final stage: List()
25/06/25 09:45:52 INFO DAGScheduler: Missing parents: List()
25/06/25 09:45:52 INFO DAGScheduler: Submitting ShuffleMapStage 4 (MapPartitionsRDD[20] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0), which has no missing parents
25/06/25 09:45:52 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 43.1 KiB, free 433.2 MiB)
25/06/25 09:45:52 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 19.4 KiB, free 433.2 MiB)
25/06/25 09:45:52 INFO SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1676
25/06/25 09:45:52 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 4 (MapPartitionsRDD[20] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0) (first 15 tasks are for partitions Vector(0))
25/06/25 09:45:52 INFO TaskSchedulerImpl: Adding task set 4.0 with 1 tasks resource profile 0
25/06/25 09:45:52 INFO TaskSetManager: Starting task 0.0 in stage 4.0 (TID 3) (e0f75306bb20,executor driver, partition 0, PROCESS_LOCAL, 10214 bytes) 
25/06/25 09:45:52 INFO Executor: Running task 0.0 in stage 4.0 (TID 3)
25/06/25 09:45:52 INFO FileScanRDD: Reading File path: file:///home/build/salaries.csv, range: 0-88, partition values: [empty row]
25/06/25 09:45:52 INFO Executor: Finished task 0.0 in stage 4.0 (TID 3). 2737 bytes result sent to driver
25/06/25 09:45:52 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 3) in 24 ms on e0f75306bb20 (executor driver) (1/1)
25/06/25 09:45:52 INFO TaskSchedulerImpl: Removed TaskSet 4.0 whose tasks have all completed, from pool 
25/06/25 09:45:52 INFO DAGScheduler: ShuffleMapStage 4 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) finished in 30 ms
25/06/25 09:45:52 INFO DAGScheduler: looking for newly runnable stages
25/06/25 09:45:52 INFO DAGScheduler: running: HashSet()
25/06/25 09:45:52 INFO DAGScheduler: waiting: HashSet()
25/06/25 09:45:52 INFO DAGScheduler: failed: HashSet()
25/06/25 09:45:52 INFO ShufflePartitionsUtil: For shuffle(1, advisory target size: 67108864, actual target size 1048576, minimum partition size: 1048576
25/06/25 09:45:52 INFO PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
25/06/25 09:45:52 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
25/06/25 09:45:52 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
25/06/25 09:45:52 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
25/06/25 09:45:52 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
25/06/25 09:45:52 INFO CodeGenerator: Code generated in 12.106696 ms
25/06/25 09:45:52 INFO SparkContext: Starting job: $anonfun$withThreadLocalCaptured$2 at <unknown>:0
25/06/25 09:45:52 INFO DAGScheduler: Got job 4 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) with 1 output partitions
25/06/25 09:45:52 INFO DAGScheduler: Final stage: ResultStage 6 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0)
25/06/25 09:45:52 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 5)
25/06/25 09:45:52 INFO DAGScheduler: Missing parents: List()
25/06/25 09:45:52 INFO DAGScheduler: Submitting ResultStage 6 (MapPartitionsRDD[23] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0), which has no missing parents
25/06/25 09:45:52 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 268.1 KiB, free 432.9 MiB)
25/06/25 09:45:52 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 98.5 KiB, free 432.8 MiB)
25/06/25 09:45:52 INFO SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:1676
25/06/25 09:45:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 6 (MapPartitionsRDD[23] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0) (first 15 tasks are for partitions Vector(0))
25/06/25 09:45:52 INFO TaskSchedulerImpl: Adding task set 6.0 with 1 tasks resource profile 0
25/06/25 09:45:52 INFO TaskSetManager: Starting task 0.0 in stage 6.0 (TID 4) (e0f75306bb20,executor driver, partition 0, NODE_LOCAL, 9631 bytes) 
25/06/25 09:45:52 INFO Executor: Running task 0.0 in stage 6.0 (TID 4)
25/06/25 09:45:53 INFO ShuffleBlockFetcherIterator: Getting 1 (216.0 B) non-empty blocks including 1 (216.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
25/06/25 09:45:53 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
25/06/25 09:45:53 INFO CodeGenerator: Code generated in 10.843528 ms
25/06/25 09:45:53 INFO PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
25/06/25 09:45:53 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
25/06/25 09:45:53 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
25/06/25 09:45:53 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
25/06/25 09:45:53 INFO FileOutputCommitter: Saved output of task 'attempt_202506250945526997694407022260226_0006_m_000000_4' to file:/home/build/data/output.csv/_temporary/0/task_202506250945526997694407022260226_0006_m_000000
25/06/25 09:45:53 INFO SparkHadoopMapRedUtil: attempt_202506250945526997694407022260226_0006_m_000000_4: Committed. Elapsed time: 0 ms.
25/06/25 09:45:53 INFO Executor: Finished task 0.0 in stage 6.0 (TID 4). 6113 bytes result sent to driver
25/06/25 09:45:53 INFO TaskSetManager: Finished task 0.0 in stage 6.0 (TID 4) in 86 ms on e0f75306bb20 (executor driver) (1/1)
25/06/25 09:45:53 INFO TaskSchedulerImpl: Removed TaskSet 6.0 whose tasks have all completed, from pool 
25/06/25 09:45:53 INFO DAGScheduler: ResultStage 6 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) finished in 113 ms
25/06/25 09:45:53 INFO DAGScheduler: Job 4 is finished. Cancelling potential speculative or zombie tasks for this job
25/06/25 09:45:53 INFO TaskSchedulerImpl: Canceling stage 6
25/06/25 09:45:53 INFO TaskSchedulerImpl: Killing all running tasks in stage 6: Stage finished
25/06/25 09:45:53 INFO DAGScheduler: Job 4 finished: $anonfun$withThreadLocalCaptured$2 at <unknown>:0, took 117.806009 ms
25/06/25 09:45:53 INFO FileFormatWriter: Start to commit write Job afa3d288-c9e0-4f12-9f3a-221e1e6dba15.
25/06/25 09:45:53 INFO FileFormatWriter: Write Job afa3d288-c9e0-4f12-9f3a-221e1e6dba15 committed. Elapsed time: 12 ms.
25/06/25 09:45:53 INFO FileFormatWriter: Finished processing stats for write job afa3d288-c9e0-4f12-9f3a-221e1e6dba15.
25/06/25 09:45:53 INFO SparkContext: SparkContext is stopping with exitCode 0 from stop at <unknown>:0.
25/06/25 09:45:53 INFO SparkUI: Stopped Spark web UI at http://e0f75306bb20:4040
25/06/25 09:45:53 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
25/06/25 09:45:53 INFO MemoryStore: MemoryStore cleared
25/06/25 09:45:53 INFO BlockManager: BlockManager stopped
25/06/25 09:45:53 INFO BlockManagerMaster: BlockManagerMaster stopped
25/06/25 09:45:53 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
25/06/25 09:45:53 INFO SparkContext: Successfully stopped SparkContext
25/06/25 09:45:53 INFO ShutdownHookManager: Shutdown hook called
25/06/25 09:45:53 INFO ShutdownHookManager: Deleting directory /tmp/spark-bc9c0761-ecd5-4a19-8f50-d76c0f6343df/pyspark-f9b8058a-9fb2-40de-b2c9-35412b8c0418
25/06/25 09:45:53 INFO ShutdownHookManager: Deleting directory /tmp/spark-05273dca-ea4b-442a-a1e8-8f8890ba9b6e
25/06/25 09:45:53 INFO ShutdownHookManager: Deleting directory /home/build/artifacts/spark-b6f7513d-7c46-4cae-8211-edfe3e332f51
25/06/25 09:45:53 INFO ShutdownHookManager: Deleting directory /tmp/spark-bc9c0761-ecd5-4a19-8f50-d76c0f6343df
pod e0f75306bb2058edfec8bc53b6149e271667d60d259ecff9293cf541f1d4978e terminated
command "melange" completed successfully
tests completed successfully
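From the log output above, the smoke test appears to read salaries.csv, compute the average salary per age group, print the result table, and write it back out as CSV. A minimal pure-Python sketch of that aggregation step (the sample rows below are hypothetical, chosen only to be consistent with the averages printed in the log; the actual contents of salaries.csv are not shown):

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows consistent with the averages in the build log output;
# the real salaries.csv used by the test is not included in the log.
SAMPLE_CSV = """name,age,salary
alice,30,60000
bob,35,65000
carol,35,70000
dave,40,80000
"""

def average_salary_by_age(text):
    """Group CSV rows by age and return the mean salary per age group."""
    totals = defaultdict(lambda: [0.0, 0])  # age -> [salary sum, row count]
    for row in csv.DictReader(io.StringIO(text)):
        acc = totals[int(row["age"])]
        acc[0] += float(row["salary"])
        acc[1] += 1
    return {age: s / n for age, (s, n) in totals.items()}

print(average_salary_by_age(SAMPLE_CSV))
# → {30: 60000.0, 35: 67500.0, 40: 80000.0}
```

In the build itself this aggregation runs through PySpark (note the pyspark temp directory deleted by the shutdown hook), so the Spark equivalent would be a `groupBy("age").avg("salary")` over the CSV; the sketch above only mirrors the arithmetic the log's result table reflects.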

aarch64 Logs

List()
25/06/25 09:46:03 INFO DAGScheduler: Missing parents: List()
25/06/25 09:46:03 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[13] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0), which has no missing parents
25/06/25 09:46:03 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 42.5 KiB, free 433.9 MiB)
25/06/25 09:46:03 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 19.2 KiB, free 433.8 MiB)
25/06/25 09:46:03 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1676
25/06/25 09:46:03 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[13] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0) (first 15 tasks are for partitions Vector(0))
25/06/25 09:46:03 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks resource profile 0
25/06/25 09:46:03 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1) (699770225a2d,executor driver, partition 0, PROCESS_LOCAL, 10214 bytes) 
25/06/25 09:46:03 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 90.221029 ms
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 16.220118 ms
25/06/25 09:46:04 INFO SecurityManager: Changing view acls to: root
25/06/25 09:46:04 INFO SecurityManager: Changing modify acls to: root
25/06/25 09:46:04 INFO SecurityManager: Changing view acls groups to: root
25/06/25 09:46:04 INFO SecurityManager: Changing modify acls groups to: root
25/06/25 09:46:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: root groups with view permissions: EMPTY; users with modify permissions: root; groups with modify permissions: EMPTY; RPC SSL disabled
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 9.551399 ms
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 8.461159 ms
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 7.207319 ms
25/06/25 09:46:04 INFO FileScanRDD: Reading File path: file:///home/build/salaries.csv, range: 0-88, partition values: [empty row]
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 8.45524 ms
25/06/25 09:46:04 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 2780 bytes result sent to driver
25/06/25 09:46:04 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 325 ms on 699770225a2d (executor driver) (1/1)
25/06/25 09:46:04 INFO TaskSchedulerImpl: Removed TaskSet 1.0 whose tasks have all completed, from pool 
25/06/25 09:46:04 INFO DAGScheduler: ShuffleMapStage 1 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) finished in 349 ms
25/06/25 09:46:04 INFO DAGScheduler: looking for newly runnable stages
25/06/25 09:46:04 INFO DAGScheduler: running: HashSet()
25/06/25 09:46:04 INFO DAGScheduler: waiting: HashSet()
25/06/25 09:46:04 INFO DAGScheduler: failed: HashSet()
25/06/25 09:46:04 INFO ShufflePartitionsUtil: For shuffle(0, advisory target size: 67108864, actual target size 1048576, minimum partition size: 1048576
25/06/25 09:46:04 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 25.408397 ms
25/06/25 09:46:04 INFO SparkContext: Starting job: $anonfun$withThreadLocalCaptured$2 at <unknown>:0
25/06/25 09:46:04 INFO DAGScheduler: Got job 2 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) with 1 output partitions
25/06/25 09:46:04 INFO DAGScheduler: Final stage: ResultStage 3 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0)
25/06/25 09:46:04 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 2)
25/06/25 09:46:04 INFO DAGScheduler: Missing parents: List()
25/06/25 09:46:04 INFO DAGScheduler: Submitting ResultStage 3 (MapPartitionsRDD[16] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0), which has no missing parents
25/06/25 09:46:04 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 46.9 KiB, free 433.9 MiB)
25/06/25 09:46:04 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 20.6 KiB, free 433.8 MiB)
25/06/25 09:46:04 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1676
25/06/25 09:46:04 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 3 (MapPartitionsRDD[16] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0) (first 15 tasks are for partitions Vector(0))
25/06/25 09:46:04 INFO TaskSchedulerImpl: Adding task set 3.0 with 1 tasks resource profile 0
25/06/25 09:46:04 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 2) (699770225a2d,executor driver, partition 0, NODE_LOCAL, 9631 bytes) 
25/06/25 09:46:04 INFO Executor: Running task 0.0 in stage 3.0 (TID 2)
25/06/25 09:46:04 INFO ShuffleBlockFetcherIterator: Getting 1 (216.0 B) non-empty blocks including 1 (216.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
25/06/25 09:46:04 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 17 ms
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 28.057077 ms
25/06/25 09:46:04 INFO Executor: Finished task 0.0 in stage 3.0 (TID 2). 4982 bytes result sent to driver
25/06/25 09:46:04 INFO TaskSetManager: Finished task 0.0 in stage 3.0 (TID 2) in 118 ms on 699770225a2d (executor driver) (1/1)
25/06/25 09:46:04 INFO TaskSchedulerImpl: Removed TaskSet 3.0 whose tasks have all completed, from pool 
25/06/25 09:46:04 INFO DAGScheduler: ResultStage 3 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) finished in 131 ms
25/06/25 09:46:04 INFO DAGScheduler: Job 2 is finished. Cancelling potential speculative or zombie tasks for this job
25/06/25 09:46:04 INFO TaskSchedulerImpl: Canceling stage 3
25/06/25 09:46:04 INFO TaskSchedulerImpl: Killing all running tasks in stage 3: Stage finished
25/06/25 09:46:04 INFO DAGScheduler: Job 2 finished: $anonfun$withThreadLocalCaptured$2 at <unknown>:0, took 146.327143 ms
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 12.452558 ms
+---+--------------+
|age|average_salary|
+---+--------------+
| 40|       80000.0|
| 35|       67500.0|
| 30|       60000.0|
+---+--------------+

25/06/25 09:46:04 INFO FileSourceStrategy: Pushed Filters: 
25/06/25 09:46:04 INFO FileSourceStrategy: Post-Scan Filters: Set()
25/06/25 09:46:04 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 214.0 KiB, free 433.6 MiB)
25/06/25 09:46:04 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 38.5 KiB, free 433.6 MiB)
25/06/25 09:46:04 INFO SparkContext: Created broadcast 6 from $anonfun$withThreadLocalCaptured$2 at <unknown>:0
25/06/25 09:46:04 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
25/06/25 09:46:04 INFO DAGScheduler: Registering RDD 20 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) as input to shuffle 1
25/06/25 09:46:04 INFO DAGScheduler: Got map stage job 3 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) with 1 output partitions
25/06/25 09:46:04 INFO DAGScheduler: Final stage: ShuffleMapStage 4 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0)
25/06/25 09:46:04 INFO DAGScheduler: Parents of final stage: List()
25/06/25 09:46:04 INFO DAGScheduler: Missing parents: List()
25/06/25 09:46:04 INFO DAGScheduler: Submitting ShuffleMapStage 4 (MapPartitionsRDD[20] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0), which has no missing parents
25/06/25 09:46:04 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 43.1 KiB, free 433.6 MiB)
25/06/25 09:46:04 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 19.4 KiB, free 433.5 MiB)
25/06/25 09:46:04 INFO SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1676
25/06/25 09:46:04 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 4 (MapPartitionsRDD[20] at $anonfun$withThreadLocalCaptured$2 at <unknown>:0) (first 15 tasks are for partitions Vector(0))
25/06/25 09:46:04 INFO TaskSchedulerImpl: Adding task set 4.0 with 1 tasks resource profile 0
25/06/25 09:46:04 INFO TaskSetManager: Starting task 0.0 in stage 4.0 (TID 3) (699770225a2d,executor driver, partition 0, PROCESS_LOCAL, 10214 bytes) 
25/06/25 09:46:04 INFO Executor: Running task 0.0 in stage 4.0 (TID 3)
25/06/25 09:46:04 INFO FileScanRDD: Reading File path: file:///home/build/salaries.csv, range: 0-88, partition values: [empty row]
25/06/25 09:46:04 INFO Executor: Finished task 0.0 in stage 4.0 (TID 3). 2737 bytes result sent to driver
25/06/25 09:46:04 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 3) in 47 ms on 699770225a2d (executor driver) (1/1)
25/06/25 09:46:04 INFO TaskSchedulerImpl: Removed TaskSet 4.0 whose tasks have all completed, from pool 
25/06/25 09:46:04 INFO DAGScheduler: ShuffleMapStage 4 ($anonfun$withThreadLocalCaptured$2 at <unknown>:0) finished in 56 ms
25/06/25 09:46:04 INFO DAGScheduler: looking for newly runnable stages
25/06/25 09:46:04 INFO DAGScheduler: running: HashSet()
25/06/25 09:46:04 INFO DAGScheduler: waiting: HashSet()
25/06/25 09:46:04 INFO DAGScheduler: failed: HashSet()
25/06/25 09:46:04 INFO ShufflePartitionsUtil: For shuffle(1, advisory target size: 67108864, actual target size 1048576, minimum partition size: 1048576
25/06/25 09:46:04 INFO PathOutputCommitterFactory: No output committer factory defined, defaulting to FileOutputCommitterFactory
25/06/25 09:46:04 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
25/06/25 09:46:04 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
25/06/25 09:46:04 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
25/06/25 09:46:04 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
25/06/25 09:46:04 INFO CodeGenerator: Code generated in 24.063837 ms

Indexes

https://apk.cgr.dev/chainguard-2.28-presubmit/8f1e6522e0e7613076d27e68dce6a95d9b5ee680

Packages

Tests

More Observability

Command

cg build log \
  --build-id 1a8acc0c-4edb-47bf-9247-41a0dea0cad1 \
  --project prod-eco-8de7 \
  --cluster elastic-pre \
  --namespace pre-eco-2-28 \
  --start 2025-06-25T09:09:08Z \
  --end 2025-06-25T09:56:08Z \
  --attrs pkg,arch