sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate
1 file changed, +8 -8 lines

@@ -504,14 +504,14 @@ case class HashAggregateExec(
   }

   /**
-   * Using the row-based hash map in HashAggregate is currently supported for all primitive
-   * data types during partial aggregation. However, we currently only enable the hash map for a
-   * subset of cases that've been verified to show performance improvements on our benchmarks
-   * subject to an internal conf that sets an upper limit on the maximum length of the aggregate
-   * key/value schema.
-   *
-   * This list of supported use-cases should be expanded over time.
-   */
+   * Using the row-based hash map in HashAggregate is currently supported for all primitive
+   * data types during partial aggregation. However, we currently only enable the hash map for a
+   * subset of cases that've been verified to show performance improvements on our benchmarks
+   * subject to an internal conf that sets an upper limit on the maximum length of the aggregate
+   * key/value schema.
+   *
+   * This list of supported use-cases should be expanded over time.
+   */
   private def enableRowBasedHashMap(ctx: CodegenContext): Boolean = {
     val isSupported =
       (groupingKeySchema ++ bufferSchema).forall(f => ctx.isPrimitiveType(f.dataType) ||
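For context on what this comment documents: enableRowBasedHashMap gates whether the generated code may use the fast, row-based hash map during partial aggregation, based on the grouping-key and aggregation-buffer schemas plus an internal conf that caps the combined schema length. The standalone Scala sketch below approximates that decision outside of codegen; it is illustrative only, not the actual Spark implementation. maxAggregateSchemaColumns is a hypothetical stand-in for the internal conf mentioned above, and isPrimitive roughly mirrors the ctx.isPrimitiveType check visible in the diff.

import org.apache.spark.sql.types._

// Illustrative sketch only: approximates the support check for the row-based
// hash map in partial aggregation. Names and the limit value are hypothetical.
object RowBasedHashMapSketch {

  // Hypothetical stand-in for the internal conf that caps the combined length
  // of the aggregate key/value schema.
  val maxAggregateSchemaColumns: Int = 3

  // Fixed-width primitive types, roughly what ctx.isPrimitiveType accepts
  // in the snippet above.
  private def isPrimitive(dt: DataType): Boolean = dt match {
    case BooleanType | ByteType | ShortType | IntegerType |
         LongType | FloatType | DoubleType => true
    case _ => false
  }

  def enableRowBasedHashMap(groupingKeySchema: StructType,
                            bufferSchema: StructType): Boolean = {
    val allFields = groupingKeySchema ++ bufferSchema
    // Every key and buffer field must be primitive, and the combined schema
    // must stay under the configured length cap.
    allFields.forall(f => isPrimitive(f.dataType)) &&
      allFields.length <= maxAggregateSchemaColumns
  }
}

// Example (e.g. in spark-shell): an Int key with a (Long count, Double sum) buffer qualifies.
val keySchema = StructType(Seq(StructField("k", IntegerType)))
val bufSchema = StructType(Seq(StructField("count", LongType), StructField("sum", DoubleType)))
println(RowBasedHashMapSketch.enableRowBasedHashMap(keySchema, bufSchema)) // true

With these example schemas the sketch returns true (3 columns, all fixed-width); adding a non-primitive field or exceeding the cap flips it to false.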