Spark 3.5.5 (test-spark.spark-3.5.5)
302 Total
156 Total Unique
-------- ---- ----------------------------------------------------------------------------------------------------------
22 DocTestFailure
14 AssertionError: AnalysisException not raised
14 UnsupportedOperationException: lambda function
13 PySparkAssertionError: [DIFFERENT_PANDAS_DATAFRAME] DataFrames are not almost equal:
10 handle add artifacts
9 UnsupportedOperationException: hint
6 AssertionError: False is not true
6 UnsupportedOperationException: PlanNode::CacheTable
6 UnsupportedOperationException: function: window
5 UnsupportedOperationException: function: monotonically_increasing_id
4 AssertionError: "TABLE_OR_VIEW_NOT_FOUND" does not match "view not found: v"
4 AssertionError: Attributes of DataFrame.iloc[:, 7] (column name="8_timestamp_t") are different
4 PySparkNotImplementedError: [NOT_IMPLEMENTED] rdd() is not implemented.
4 UnsupportedOperationException: function: input_file_name
4 UnsupportedOperationException: unknown aggregate function: hll_sketch_agg
4 UnsupportedOperationException: unpivot
3 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-3.5.5/lib/python3.11/site-packages/pyspark/python/test_support/sql/ages_newlines.cs...
3 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="time") are different
3 UnsupportedOperationException: handle analyze input files
3 ValueError: Converting to Python dictionary is not supported when duplicate field names are present
2 AnalysisException: Could not find config namespace "spark"
2 AnalysisException: No table format found for: orc
2 AnalysisException: not supported: list functions
2 AnalysisException: two values expected: [Column(Column { relation: None, name: "#2" }), Column(Column { relation: None, name: "#3" }), Literal(Utf8("/"), None)]
2 AssertionError
2 AssertionError: AnalysisException not raised by <lambda>
2 AssertionError: Lists differ: [Row([22 chars](key=1, value='1'), Row(key=10, value='10'), R[2402 chars]99')] != [Row([22 chars](key=0, value='0'), Row(key=1, value='1'), Row[4882 chars]99')]
2 IllegalArgumentException: expected value at line 1 column 1
2 IllegalArgumentException: invalid argument: found FUNCTION at 5:13 expected 'DATABASE', 'SCHEMA', 'TABLE', 'VIEW', 'TEMP', 'TEMPORARY', or 'FUNCTIONS'
2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@6jabwh68iuheayv3cstdmwtmp(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
2 UnsupportedOperationException: approx quantile
2 UnsupportedOperationException: collect metrics
2 UnsupportedOperationException: freq items
2 UnsupportedOperationException: function: array_sort
2 UnsupportedOperationException: function: format_number
2 UnsupportedOperationException: function: from_json
2 UnsupportedOperationException: function: schema_of_csv
2 UnsupportedOperationException: handle analyze is local
2 UnsupportedOperationException: handle analyze same semantics
2 UnsupportedOperationException: pivot
2 UnsupportedOperationException: unresolved regex
2 UnsupportedOperationException: user defined data type should only exist in a field
2 UnsupportedOperationException: with watermark
2 handle artifact statuses
2 received metadata size exceeds hard limit (19714 vs. 16384); :status:42B content-type:60B grpc-status:45B grpc-message:9084B grpc-status-details-bin:7876B ValueError: Code in Status proto (StatusCode...
1 AnalysisException: Error parsing timestamp from '2023-01-01' using format '%d-%m-%Y': input contains invalid characters
1 AnalysisException: Failed to parse placeholder id: cannot parse integer from empty string
1 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-3.5.5/lib/python3.11/site-packages/pyspark/sql/functions.py
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmp08dw05pu/text-0.text, file:///tmp/tmp08dw05pu/text-1.text, file:///tmp/tmp08dw05pu/text-2.text
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmp8a8bcfqs/
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmprfr2s2ek/
(+1) 1 AnalysisException: UNION queries have different number of columns: left has 2 columns whereas right has 3 columns
1 AnalysisException: not supported: function exists
1 AnalysisException: table already exists: tbl1
1 AnalysisException: temporary view not found: tab2
1 AssertionError: "2000000" does not match "raise_error expects a single UTF-8 string argument"
(+1) 1 AssertionError: "Database 'memory:809af789-aacf-4d65-a2a5-65bce3cff228' dropped." does not match "No table format found for: jdbc"
(+1) 1 AssertionError: "Database 'memory:a0778c8f-dde7-46d5-8376-2737f5c1cc1c' dropped." does not match "No table format found for: jdbc"
1 AssertionError: "attribute.*missing" does not match "cannot resolve attribute: ObjectName([Identifier("b")])"
1 AssertionError: "foobar" does not match "raise_error expects a single UTF-8 string argument"
1 AssertionError: '+--------------------------------+-------------------[411 chars]-+\n' != '+-----------+-----------+\n|from_csv(a)|from_csv(b)|\[105 chars]-+\n'
1 AssertionError: '+---[102 chars] c -> NULL}|\n| {c -> de}|\n+--------------------+\n' != '+---[102 chars] c -> NULL}|\n| NULL|\n+--------------------+\n'
1 AssertionError: '+---[17 chars]-----+\n| x|\n+--------[132 chars]-+\n' != '+---[17 chars]----------+\n|update_fields(x, WithField(e))|\[167 chars]-+\n'
1 AssertionError: '4.0.0' != '3.5.5'
1 AssertionError: 2 != 6
1 AssertionError: ArrayIndexOutOfBoundsException not raised
1 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="a") are different
1 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="ts") are different
1 AssertionError: Exception not raised
1 AssertionError: Exception not raised by <lambda>
1 AssertionError: Lists differ: [(1, 2), (3, 4), (None, 5), (0, 0)] != [(1, 2), (3, 4), (None, 5), (None, None)]
1 AssertionError: Lists differ: [Row([14 chars] _c1=25, _c2='I am Hyukjin\n\nI love Spark!'),[86 chars]om')] != [Row([14 chars] _c1='25', _c2='I am Hyukjin\n\nI love Spark!'[92 chars]om')]
1 AssertionError: Lists differ: [Row(id=90, name='90'), Row(id=91, name='91'), Ro[176 chars]99')] != [Row(id=15, name='15'), Row(id=16, name='16'), Ro[176 chars]24')]
1 AssertionError: Lists differ: [Row(key='0'), Row(key='1'), Row(key='10'), Row(ke[1435 chars]99')] != [Row(key=0), Row(key=1), Row(key=10), Row(key=11),[1235 chars]=99)]
1 AssertionError: Lists differ: [Row(ln(id)=0.0, ln(id)=0.0, struct(id, name)=Row(id=[1232 chars]0'))] != [Row(ln(id)=4.31748811353631, ln(id)=4.31748811353631[1312 chars]4'))]
1 AssertionError: Lists differ: [Row(name='Andy', age=30), Row(name='Justin', [34 chars]one)] != [Row(_corrupt_record=' "age":19}\n', name=None[104 chars]el')]
1 AssertionError: Row(point='[1.0, 2.0]', pypoint='[3.0, 4.0]') != Row(point='(1.0, 2.0)', pypoint='[3.0, 4.0]')
1 AssertionError: StorageLevel(False, True, True, False, 1) != StorageLevel(False, False, False, False, 1)
1 AssertionError: Struc[30 chars]estampType(), True), StructField('val', IntegerType(), True)]) != Struc[30 chars]estampType(), True), StructField('val', IntegerType(), False)])
1 AssertionError: Struc[32 chars]e(), False), StructField('b', DoubleType(), Fa[158 chars]ue)]) != Struc[32 chars]e(), True), StructField('b', DoubleType(), Tru[154 chars]ue)])
1 AssertionError: Struc[40 chars]ue), StructField('val', ArrayType(DoubleType(), False), True)]) != Struc[40 chars]ue), StructField('val', PythonOnlyUDT(), True)])
1 AssertionError: Struc[64 chars]Type(), True), StructField('i', StringType(), True)]), False)]) != Struc[64 chars]Type(), True), StructField('i', StringType(), True)]), True)])
1 AssertionError: Struc[69 chars]e(), True), StructField('name', StringType(), True)]), True)]) != Struc[69 chars]e(), True), StructField('name', StringType(), True)]), False)])
1 AssertionError: YearMonthIntervalType(0, 1) != YearMonthIntervalType(0, 0)
1 AssertionError: [1.0, 2.0] != ExamplePoint(1.0,2.0)
1 AssertionError: dtype('<M8[us]') != 'datetime64[ns]'
1 AttributeError: 'DataFrame' object has no attribute '_ipython_key_completions_'
1 AttributeError: 'DataFrame' object has no attribute '_joinAsOf'
1 IllegalArgumentException: invalid argument: found FUNCTION at 7:15 expected 'DATABASE', 'SCHEMA', 'OR', 'TEMP', 'TEMPORARY', 'EXTERNAL', 'TABLE', 'GLOBAL', or 'VIEW'
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] foreach() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] foreachPartition() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] localCheckpoint() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] sparkContext() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] toJSON() is not implemented.
1 PythonException: AttributeError: 'NoneType' object has no attribute 'partitionId'
1 PythonException: AttributeError: 'list' object has no attribute 'x'
1 PythonException: AttributeError: 'list' object has no attribute 'y'
1 SparkRuntimeException: Cast error: Cannot cast string 'abc' to value of Float64 type
1 SparkRuntimeException: Cast error: Cannot cast to Decimal128(30, 15). Overflowing on NaN
1 SparkRuntimeException: Invalid argument error: 83140 is too large to store in a Decimal128 of precision 4. Max is 9999
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected Int64 but found List(Field { name: "item", data_type: Int64, nullable: true, dict_id: 0, dict_is_ordered: ...
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected LargeUtf8 but found Utf8 at column index 0
1 SparkRuntimeException: Json error: Not valid JSON: EOF while parsing a list at line 1 column 1
1 SparkRuntimeException: Json error: Not valid JSON: expected value at line 1 column 2
1 SparkRuntimeException: Parser error: Error parsing timestamp from '1997/02/28 10:30:00': error parsing date
1 SparkRuntimeException: Parser error: Error while parsing value '0
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@6jabwh68iuheayv3cstdmwtmp(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@6jabwh68iuheayv3cstdmwtmp(plus_one@3fz6es6jadwlqp420cnsho3cq(#9)) PART...
1 UnsupportedOperationException: COUNT DISTINCT with multiple arguments
1 UnsupportedOperationException: PlanNode::ClearCache
1 UnsupportedOperationException: PlanNode::IsCached
1 UnsupportedOperationException: PlanNode::RecoverPartitions
1 UnsupportedOperationException: SHOW FUNCTIONS
1 UnsupportedOperationException: Support for 'approx_distinct' for data type Float64 is not implemented
1 UnsupportedOperationException: bucketing for writing listing table format
1 UnsupportedOperationException: deduplicate within watermark
1 UnsupportedOperationException: function: format_string
1 UnsupportedOperationException: function: java_method
1 UnsupportedOperationException: function: json_tuple
1 UnsupportedOperationException: function: printf
1 UnsupportedOperationException: function: reflect
1 UnsupportedOperationException: function: regexp_extract
1 UnsupportedOperationException: function: regexp_extract_all
1 UnsupportedOperationException: function: regexp_substr
1 UnsupportedOperationException: function: schema_of_json
1 UnsupportedOperationException: function: sentences
1 UnsupportedOperationException: function: session_window
1 UnsupportedOperationException: function: soundex
1 UnsupportedOperationException: function: spark_partition_id
1 UnsupportedOperationException: function: to_char
1 UnsupportedOperationException: function: to_csv
1 UnsupportedOperationException: function: to_json
1 UnsupportedOperationException: function: to_varchar
1 UnsupportedOperationException: function: xpath
1 UnsupportedOperationException: function: xpath_boolean
1 UnsupportedOperationException: function: xpath_double
1 UnsupportedOperationException: function: xpath_float
1 UnsupportedOperationException: function: xpath_int
1 UnsupportedOperationException: function: xpath_long
1 UnsupportedOperationException: function: xpath_number
1 UnsupportedOperationException: function: xpath_short
1 UnsupportedOperationException: function: xpath_string
1 UnsupportedOperationException: handle analyze semantic hash
1 UnsupportedOperationException: unknown aggregate function: bitmap_construct_agg
1 UnsupportedOperationException: unknown aggregate function: bitmap_or_agg
1 UnsupportedOperationException: unknown aggregate function: count_min_sketch
1 UnsupportedOperationException: unknown aggregate function: grouping_id
1 UnsupportedOperationException: unknown aggregate function: histogram_numeric
1 UnsupportedOperationException: unknown aggregate function: percentile
1 UnsupportedOperationException: unknown aggregate function: try_avg
1 UnsupportedOperationException: unknown aggregate function: try_sum
1 UnsupportedOperationException: unknown function: distributed_sequence_id
1 UnsupportedOperationException: unknown function: product
1 ValueError: Code in Status proto (StatusCode.INTERNAL) doesn't match status code (StatusCode.RESOURCE_EXHAUSTED)
1 ValueError: The column label 'id' is not unique.
1 ValueError: The column label 'struct' is not unique.
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpju_4pcpx/
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpmhr5xys4/text-0.text, file:///tmp/tmpmhr5xys4/text-1.text, file:///tmp/tmpmhr5xys4/text-2.text
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpvz9foku0/
(-1) 0 AnalysisException: UNION queries have different number of columns: left has 3 columns whereas right has 2 columns
(-1) 0 AssertionError: "Database 'memory:59bd6dee-1802-416e-bb68-11f5f9920e77' dropped." does not match "No table format found for: jdbc"
(-1) 0 AssertionError: "Database 'memory:e5fa4550-fdec-44b9-b8c8-14323bfbbed3' dropped." does not match "No table format found for: jdbc"
Spark 4.0.0 (test-spark.spark-4.0.0)
(+1) 673 Total
(+1) 241 Total Unique
-------- ---- ----------------------------------------------------------------------------------------------------------
41 IllegalArgumentException: missing argument: Python UDTF return type
25 register data source command
24 DocTestFailure
23 UnsupportedOperationException: function: parse_json
23 UnsupportedOperationException: with relations
18 UnsupportedOperationException: handle add artifacts
18 UnsupportedOperationException: named argument expression
14 AssertionError: 1 != 0 : dict_keys([])
14 UnsupportedOperationException: lambda function
13 UnsupportedOperationException: variant data type
12 AssertionError: AnalysisException not raised
12 AssertionError: False is not true
12 IllegalArgumentException: expected value at line 1 column 1
12 PySparkAssertionError: [DIFFERENT_PANDAS_DATAFRAME] DataFrames are not almost equal:
12 UnsupportedOperationException: unresolved table valued function
11 IllegalArgumentException: invalid argument: expected function for lateral table factor
10 UnsupportedOperationException: hint
10 UnsupportedOperationException: lateral join
9 AssertionError: 3 != 0 : []
8 AssertionError
6 UnsupportedOperationException: PlanNode::CacheTable
6 UnsupportedOperationException: collect metrics
6 UnsupportedOperationException: function: spark_partition_id
6 UnsupportedOperationException: function: window
6 UnsupportedOperationException: handle analyze is local
5 AssertionError: `query_context_type` is required when QueryContext exists. QueryContext: [].
5 UnsupportedOperationException: named function arguments
5 UnsupportedOperationException: unpivot
5 UnsupportedOperationException: user defined data type should only exist in a field
5 checkpoint command
4 AnalysisException: temporary view not found: t2
4 AssertionError: "TABLE_OR_VIEW_NOT_FOUND" does not match "view not found: v"
4 AssertionError: AnalysisException not raised by <lambda>
4 PythonException: PySparkRuntimeError: [UDTF_EVAL_METHOD_ARGUMENTS_DO_NOT_MATCH_SIGNATURE] Failed to evaluate the user-defined table function '' because the function arguments did not match the expect...
4 UnsupportedOperationException: approx quantile
4 UnsupportedOperationException: function: input_file_name
4 UnsupportedOperationException: function: monotonically_increasing_id
4 UnsupportedOperationException: unknown aggregate function: hll_sketch_agg
3 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-4.0.0/lib/python3.11/site-packages/pyspark/python/test_support/sql/ages_newlines.cs...
3 AssertionError: 1 != 0
3 IllegalArgumentException: invalid argument: extraction must be a literal
3 UnsupportedOperationException: function: from_json
3 UnsupportedOperationException: function: shuffle
3 UnsupportedOperationException: handle analyze input files
3 UnsupportedOperationException: pivot
3 UnsupportedOperationException: transpose
3 UnsupportedOperationException: unknown table function: IDENTIFIER
3 ValueError: Converting to Python dictionary is not supported when duplicate field names are present
2 AnalysisException: Failed to parse placeholder id: cannot parse integer from empty string
2 AnalysisException: Invalid Python user-defined table function return type. Expect a struct type, but got Int32.
2 AnalysisException: No table format found for: orc
2 AnalysisException: ambiguous attribute: ObjectName([Identifier("id")])
2 AnalysisException: not supported: list functions
2 AnalysisException: temporary view not found: variant_table
2 AnalysisException: two values expected: [Column(Column { relation: None, name: "#2" }), Column(Column { relation: None, name: "#3" }), Literal(Utf8("/"), None)]
2 AssertionError: 3 != 0 : dict_keys([])
2 AssertionError: unexpectedly None
2 IllegalArgumentException: invalid argument: found FUNCTION at 5:13 expected 'DATABASE', 'SCHEMA', 'TABLE', 'VIEW', 'TEMP', 'TEMPORARY', or 'FUNCTIONS'
2 IllegalArgumentException: invalid argument: found PARTITION at 281:290 expected ',', or ')'
2 IllegalArgumentException: invalid argument: found PARTITION at 295:304 expected ',', or ')'
2 IllegalArgumentException: invalid argument: found PARTITION at 59:68 expected ',', or ')'
2 IllegalArgumentException: invalid argument: found WITH at 171:175 expected ',', or ')'
2 IllegalArgumentException: invalid argument: found WITH at 279:283 expected ',', or ')'
2 PySparkAssertionError: [DIFFERENT_ROWS] Results do not match: ( 99.50000 % )
2 PythonException: AssertionError: assert None is not None
2 PythonException: AttributeError: 'NoneType' object has no attribute 'cpus'
2 PythonException: KeyError: 'a'
2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@1lyeq07trczbg3c9ihccuvt1t(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: mean_udf@2yjkzpz26kl3afqanhvf3n18m(#3) PARTITION BY [#2] ORDER BY [#3 ASC ...
2 UnsupportedOperationException: CLUSTER BY for write
2 UnsupportedOperationException: LATERAL JOIN with criteria
2 UnsupportedOperationException: Physical plan does not support logical expression Wildcard { qualifier: None, options: WildcardOptions { ilike: None, exclude: None, except: None, replace: None, rename:...
2 UnsupportedOperationException: freq items
2 UnsupportedOperationException: function: format_number
2 UnsupportedOperationException: function: from_xml
2 UnsupportedOperationException: function: randstr
2 UnsupportedOperationException: function: to_variant_object
2 UnsupportedOperationException: function: try_make_interval
2 UnsupportedOperationException: function: try_make_timestamp
2 UnsupportedOperationException: function: try_make_timestamp_ltz
2 UnsupportedOperationException: function: try_make_timestamp_ntz
2 UnsupportedOperationException: function: try_parse_json
2 UnsupportedOperationException: handle analyze same semantics
2 UnsupportedOperationException: unknown function: distributed_sequence_id
2 UnsupportedOperationException: unresolved regex
2 UnsupportedOperationException: wildcard with plan ID
2 UnsupportedOperationException: with watermark
2 create resource profile command
2 handle artifact statuses
2 received metadata size exceeds hard limit (19714 vs. 16384); :status:42B content-type:60B grpc-status:45B grpc-message:9084B grpc-status-details-bin:7876B ValueError: Code in Status proto (StatusCode...
1 AnalysisException: Could not find config namespace "mapred"
1 AnalysisException: Error parsing timestamp from '082017' using format '%m%Y': input is not enough for unique date and time
1 AnalysisException: Error parsing timestamp from '2014-31-12' using format '%Y-%d-%pa': input contains invalid characters
1 AnalysisException: Error parsing timestamp from '2023-01-01' using format '%d-%m-%Y': input contains invalid characters
1 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-4.0.0/lib/python3.11/site-packages/pyspark/sql/functions/builtin.py
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/test_multi_paths1dyms_qku/text-0.text, file:///tmp/test_multi_paths1dyms_qku/text-1.text, file:///tmp/test_multi_paths1dyms_qku/te...
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmp9w7yb0t8/
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmpifyp7sp_/
1 AnalysisException: No table format found for: xml
1 AnalysisException: Schema contains duplicate unqualified field name "nth_value(#5,Int32(2)) RESPECT NULLS PARTITION BY [#3] ORDER BY [#4 ASC NULLS FIRST] RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ...
1 AnalysisException: UNION queries have different number of columns: left has 2 columns whereas right has 3 columns
1 AnalysisException: ambiguous attribute: ObjectName([Identifier("b")])
1 AnalysisException: ambiguous attribute: ObjectName([Identifier("i")])
1 AnalysisException: cannot resolve attribute: ObjectName([Identifier("x")])
1 AnalysisException: element_at expects List or Map type as first argument, got Null
1 AnalysisException: not supported: function exists
1 AnalysisException: table already exists: tbl1
1 AnalysisException: temporary view not found: tab2
1 AnalysisException: temporary view not found: v2
1 AnalysisException: too big
(+1) 1 AssertionError: "Database 'memory:23c16e92-9db5-4cbc-96cd-c9c6db9fc43a' dropped." does not match "No table format found for: jdbc"
(+1) 1 AssertionError: "Database 'memory:61dc4643-ac06-419f-b1b8-b4a6d06246b6' dropped." does not match "No table format found for: jdbc"
1 AssertionError: "Invalid return type" does not match " AttributeError: 'Series' object has no attribute 'columns'
1 AssertionError: "PARTITION_TRANSFORM_EXPRESSION_NOT_IN_PARTITIONED_BY" does not match "unknown function: years"
1 AssertionError: "UNRESOLVED_COLUMN.WITH_SUGGESTION" does not match "cannot resolve attribute: ObjectName([Identifier("b")])"
1 AssertionError: "foobar" does not match "raise_error expects a single UTF-8 string argument"
1 AssertionError: "requirement failed: Cogroup keys must have same size: 2 != 1" does not match "invalid argument: child plan grouping expressions must have the same length"
1 AssertionError: '+--------------------------------+-------------------[411 chars]-+\n' != '+-----------+-----------+\n|from_csv(a)|from_csv(b)|\[105 chars]-+\n'
1 AssertionError: '+---[102 chars] c -> NULL}|\n| {c -> de}|\n+--------------------+\n' != '+---[102 chars] c -> NULL}|\n| NULL|\n+--------------------+\n'
1 AssertionError: '+---[17 chars]-----+\n| x|\n+--------[132 chars]-+\n' != '+---[17 chars]----------+\n|update_fields(x, WithField(e))|\[167 chars]-+\n'
(+1) 1 AssertionError: '+---[177 chars] 1| 2|\n| 2| 3[75 chars]-+\n' != '+---[177 chars] 2| 2|\n| 2| 3[75 chars]-+\n'
1 AssertionError: '+---[23 chars]---+-----+\n| 1| 1|\n+---+-----+\nonly showing top 1 row' != '+---[23 chars]---+-----+\n| 1| 1|\n+---+-----+\nonly showing top 1 row\n'
1 AssertionError: 'deadbeef' is not None
1 AssertionError: 0 not greater than 0
1 AssertionError: 0.40248566366484795 != 0.9531453492357947 : Column<'rand(1)'>
1 AssertionError: 2 != 6
1 AssertionError: 6 != 0 : []
1 AssertionError: ArrayIndexOutOfBoundsException not raised
1 AssertionError: Exception not raised
1 AssertionError: Exception not raised by <lambda>
1 AssertionError: Lists differ: [(1, 2), (3, 4), (None, 5), (0, 0)] != [(1, 2), (3, 4), (None, 5), (None, None)]
1 AssertionError: Lists differ: [Row([14 chars] _c1=25, _c2='I am Hyukjin\n\nI love Spark!'),[86 chars]om')] != [Row([14 chars] _c1='25', _c2='I am Hyukjin\n\nI love Spark!'[92 chars]om')]
1 AssertionError: Lists differ: [Row(id=90, name='90'), Row(id=91, name='91'), Ro[176 chars]99')] != [Row(id=15, name='15'), Row(id=16, name='16'), Ro[176 chars]24')]
1 AssertionError: Lists differ: [Row(key='0'), Row(key='1'), Row(key='10'), Row(ke[1435 chars]99')] != [Row(key=0), Row(key=1), Row(key=10), Row(key=11),[1235 chars]=99)]
1 AssertionError: Lists differ: [Row(ln(id)=0.0, ln(id)=0.0, struct(id, name)=Row(id=[1232 chars]0'))] != [Row(ln(id)=4.31748811353631, ln(id)=4.31748811353631[1312 chars]4'))]
1 AssertionError: Lists differ: [Row(name='Andy', age=30), Row(name='Justin', [34 chars]one)] != [Row(_corrupt_record=' "age":19}\n', name=None[104 chars]el')]
1 AssertionError: Row(point='[1.0, 2.0]', pypoint='[3.0, 4.0]') != Row(point='(1.0, 2.0)', pypoint='[3.0, 4.0]')
1 AssertionError: SparkConnectGrpcException not raised
1 AssertionError: StorageLevel(False, True, True, False, 1) != StorageLevel(False, False, False, False, 1)
1 AssertionError: Struc[30 chars]estampType(), True), StructField('val', IntegerType(), True)]) != Struc[30 chars]estampType(), True), StructField('val', IntegerType(), False)])
1 AssertionError: Struc[32 chars]e(), False), StructField('b', DoubleType(), Fa[158 chars]ue)]) != Struc[32 chars]e(), True), StructField('b', DoubleType(), Tru[154 chars]ue)])
1 AssertionError: Struc[40 chars]ue), StructField('val', ArrayType(DoubleType(), False), True)]) != Struc[40 chars]ue), StructField('val', PythonOnlyUDT(), True)])
1 AssertionError: True is not false : Default URL is not secure
1 AssertionError: YearMonthIntervalType(0, 1) != YearMonthIntervalType(0, 0)
1 AssertionError: [1.0, 2.0] != ExamplePoint(1.0,2.0)
1 AttributeError: 'NoneType' object has no attribute 'extract_graph'
1 AttributeError: 'NoneType' object has no attribute 'toText'
1 FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/sail/sail/.venvs/test-spark.spark-4.0.0/lib/python3.11/site-packages/pyspark/data/artifact-tests/junitLargeJar.jar'
(+1) 1 FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpkqo9dob2'
1 IllegalArgumentException: data did not match any variant of untagged enum JsonDataType
1 IllegalArgumentException: invalid argument: empty data type
1 IllegalArgumentException: invalid argument: field not found in input schema: col1
1 IllegalArgumentException: invalid argument: found ( at 114:115 expected ':', data type, ',', or ')'
1 IllegalArgumentException: invalid argument: found FUNCTION at 7:15 expected 'DATABASE', 'SCHEMA', 'OR', 'TEMP', 'TEMPORARY', 'EXTERNAL', 'TABLE', 'GLOBAL', or 'VIEW'
1 IllegalArgumentException: invalid argument: found collate at 13:20 expected 'AS', identifier, '(', ',', 'FROM', 'LATERAL', 'WHERE', 'GROUP', 'HAVING', 'INTERSECT', 'UNION', 'EXCEPT', 'MINUS', 'WINDOW'...
1 IllegalArgumentException: invalid argument: found something at 0:3 expected something else
1 IllegalArgumentException: invalid argument: grouping sets with grouping expressions
1 IllegalArgumentException: invalid argument: invalid user-defined window function type
1 IllegalArgumentException: invalid argument: table does not exist: ObjectName([Identifier("test_table")])
(+1) 1 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (9002187b-7ae5-4fff-979f-64b778e6f73e != 739d5bd6-e015-4ba0-bef7-6...
(+1) 1 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (b7f06666-68d2-4667-8649-bc034ebbed9a != 91a22e96-d49d-46b8-aed1-3...
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] rdd is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] toJSON() is not implemented.
1 PythonException: AttributeError: 'NoneType' object has no attribute 'partitionId'
1 PythonException: AttributeError: 'list' object has no attribute 'x'
1 PythonException: AttributeError: 'list' object has no attribute 'y'
1 PythonException: TypeError: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row).
1 SparkRuntimeException: Cast error: Cannot cast string 'abc' to value of Float64 type
1 SparkRuntimeException: Cast error: Cannot cast to Decimal128(30, 15). Overflowing on NaN
1 SparkRuntimeException: Compute error: Cannot perform a binary operation on arrays of different length
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected Int64 but found List(Field { name: "item", data_type: Int64, nullable: true, dict_id: 0, dict_is_ordered: ...
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected LargeUtf8 but found Utf8 at column index 0
1 SparkRuntimeException: Json error: Not valid JSON: EOF while parsing a list at line 1 column 1
1 SparkRuntimeException: Json error: Not valid JSON: expected value at line 1 column 2
1 SparkRuntimeException: Parser error: Error while parsing value '0
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@1lyeq07trczbg3c9ihccuvt1t(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@1lyeq07trczbg3c9ihccuvt1t(plus_one@6u0mz3nm98p9nxyf30cgvjry(#9)) PARTI...
1 UnsupportedOperationException: COUNT DISTINCT with multiple arguments
1 UnsupportedOperationException: PlanNode::ClearCache
1 UnsupportedOperationException: PlanNode::IsCached
1 UnsupportedOperationException: PlanNode::RecoverPartitions
1 UnsupportedOperationException: SHOW FUNCTIONS
1 UnsupportedOperationException: Support for 'approx_distinct' for data type Float64 is not implemented
1 UnsupportedOperationException: Support for 'approx_distinct' for data type Struct(name Utf8, value Int64) is not implemented
1 UnsupportedOperationException: as of join
1 UnsupportedOperationException: bucketing for writing listing table format
1 UnsupportedOperationException: deduplicate within watermark
1 UnsupportedOperationException: function: array_sort
1 UnsupportedOperationException: function: collate
1 UnsupportedOperationException: function: collation
1 UnsupportedOperationException: function: format_string
1 UnsupportedOperationException: function: java_method
1 UnsupportedOperationException: function: json_tuple
1 UnsupportedOperationException: function: printf
1 UnsupportedOperationException: function: reflect
1 UnsupportedOperationException: function: regexp_extract
1 UnsupportedOperationException: function: regexp_extract_all
1 UnsupportedOperationException: function: regexp_substr
1 UnsupportedOperationException: function: schema_of_csv
1 UnsupportedOperationException: function: schema_of_json
1 UnsupportedOperationException: function: schema_of_xml
1 UnsupportedOperationException: function: sentences
1 UnsupportedOperationException: function: session_window
1 UnsupportedOperationException: function: soundex
1 UnsupportedOperationException: function: to_char
1 UnsupportedOperationException: function: to_csv
1 UnsupportedOperationException: function: to_json
1 UnsupportedOperationException: function: to_varchar
1 UnsupportedOperationException: function: to_xml
1 UnsupportedOperationException: function: try_reflect
1 UnsupportedOperationException: function: try_url_decode
1 UnsupportedOperationException: function: uniform
1 UnsupportedOperationException: function: xpath
1 UnsupportedOperationException: function: xpath_boolean
1 UnsupportedOperationException: function: xpath_double
1 UnsupportedOperationException: function: xpath_float
1 UnsupportedOperationException: function: xpath_int
1 UnsupportedOperationException: function: xpath_long
1 UnsupportedOperationException: function: xpath_number
1 UnsupportedOperationException: function: xpath_short
1 UnsupportedOperationException: function: xpath_string
1 UnsupportedOperationException: handle analyze json to ddl
1 UnsupportedOperationException: handle analyze semantic hash
1 UnsupportedOperationException: named window function arguments
1 UnsupportedOperationException: unknown aggregate function: bitmap_construct_agg
1 UnsupportedOperationException: unknown aggregate function: bitmap_or_agg
1 UnsupportedOperationException: unknown aggregate function: count_min_sketch
1 UnsupportedOperationException: unknown aggregate function: grouping_id
1 UnsupportedOperationException: unknown aggregate function: histogram_numeric
1 UnsupportedOperationException: unknown aggregate function: percentile
1 UnsupportedOperationException: unknown aggregate function: try_avg
1 UnsupportedOperationException: unknown aggregate function: try_sum
1 UnsupportedOperationException: unknown function: product
1 UnsupportedOperationException: unknown function: timestampadd
1 UnsupportedOperationException: unknown function: timestampdiff
1 ValueError: Code in Status proto (StatusCode.INTERNAL) doesn't match status code (StatusCode.RESOURCE_EXHAUSTED)
1 ValueError: The column label 'id' is not unique.
1 ValueError: The column label 'struct' is not unique.
1 failed to decode Protobuf message: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.i...
1 handle add artifacts
1 received metadata size exceeds hard limit (value length 25049 vs. 16384)
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/test_multi_paths1gegom_hw/text-0.text, file:///tmp/test_multi_paths1gegom_hw/text-1.text, file:///tmp/test_multi_paths1gegom_hw/te...
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmp0c5mw0tg/
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpx4fmmv45/
(-1) 0 AssertionError: "Database 'memory:554b25f1-b5bc-4d24-bd18-488f22a9a88a' dropped." does not match "No table format found for: jdbc"
(-1) 0 AssertionError: "Database 'memory:d4cd58b5-2118-4001-a827-0275eb930cb6' dropped." does not match "No table format found for: jdbc"
(-1) 0 FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpiiktnotk'
(-1) 0 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (6f3149de-0642-4a3b-ac8e-36ae17896bec != 32bf8efd-e548-4424-9d9c-e...
(-1) 0 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (bfc0b4d0-24d0-4faa-bf69-e5f68792271a != b119aba9-9eb2-4098-8a19-c...
Closes #516.
This is a follow-up of #687.
Note:
parquet.cache_metadata is removed in apache/datafusion#17062, so we can no longer configure the metadata cache for each data source.