
[SPARK-30810][SQL] Parses and convert a CSV Dataset having different column from 'value' in csv(dataset) API #27561

Closed
HyukjinKwon wants to merge 2 commits into apache:master from HyukjinKwon:SPARK-30810

Conversation

@HyukjinKwon (Member) commented Feb 13, 2020

What changes were proposed in this pull request?

This PR fixes the `DataFrameReader.csv(dataset: Dataset[String])` API to accept a `Dataset[String]` originating from a column named something other than `value`. This is a long-standing bug that has existed since the API was first introduced.

`CSVUtils.filterCommentAndEmpty` assumed the `Dataset[String]` originated from a column named `value`. This PR changes it to use the first column name in the schema instead.

Why are the changes needed?

For `DataFrameReader.csv(dataset: Dataset[String])` to support any `Dataset[String]`, as its signature indicates.

Does this PR introduce any user-facing change?

Yes:

```scala
val ds = spark.range(2).selectExpr("concat('a,b,', id) AS text").as[String]
spark.read.option("header", true).option("inferSchema", true).csv(ds).show()
```

Before:

```
org.apache.spark.sql.AnalysisException: cannot resolve '`value`' given input columns: [text];;
'Filter (length(trim('value, None)) > 0)
+- Project [concat(a,b,, cast(id#0L as string)) AS text#2]
   +- Range (0, 2, step=1, splits=Some(2))
```

After:

```
+---+---+---+
|  a|  b|  0|
+---+---+---+
|  a|  b|  1|
+---+---+---+
```

How was this patch tested?

A unit test was added.
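The idea behind the fix, building the comment/empty-line filter against the input's own first column rather than a column hard-coded as `value`, can be sketched in plain Scala without a Spark dependency. `SingleColumnDataset` and `CsvFilterSketch` below are hypothetical stand-ins for Spark's `Dataset[String]` and `CSVUtils.filterCommentAndEmpty`, not the actual implementation:

```scala
// Hypothetical model of a single-column dataset that carries its column name,
// standing in for Spark's Dataset[String] (whose column may be named anything,
// e.g. "text" rather than "value").
final case class SingleColumnDataset(columnName: String, values: Seq[String])

object CsvFilterSketch {
  // Sketch of the fixed behavior: drop empty and comment lines by operating on
  // whatever the input's single column is. The buggy version instead resolved
  // a column literally named "value", which failed with an AnalysisException
  // when the column had another name.
  def filterCommentAndEmpty(
      ds: SingleColumnDataset,
      comment: Option[Char]): SingleColumnDataset = {
    val kept = ds.values.filter { line =>
      line.trim.nonEmpty && !comment.exists(c => line.startsWith(c.toString))
    }
    // The column name is carried through untouched, whatever it is.
    ds.copy(values = kept)
  }
}
```

In the real fix the same idea applies at the logical-plan level: the filter expression is constructed from the dataset's own first column in the schema instead of a lookup for `value`.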

@SparkQA commented Feb 13, 2020

Test build #118356 has finished for PR 27561 at commit adb9b22.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

```scala
// with the one below, `filterCommentAndEmpty` but execution path is different. One of them
// might have to be removed in the near future if possible.
import lines.sqlContext.implicits._
val nonEmptyLines = lines.filter(length(trim($"value")) > 0)
```
@HyukjinKwon (Member, Author) commented:

@MaxGekk and @cloud-fan, I came up with a better idea to avoid relying on string format in col. Can you take a look again? I think this way is safer.

@SparkQA commented Feb 14, 2020

Test build #118386 has finished for PR 27561 at commit 4e9ddf1.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan (Contributor) commented:

LGTM, merging to master/3.0!

@cloud-fan cloud-fan closed this in 2a270a7 Feb 14, 2020
cloud-fan pushed a commit that referenced this pull request Feb 14, 2020
…column from 'value' in csv(dataset) API

(commit message duplicates the pull request description above)

Closes #27561 from HyukjinKwon/SPARK-30810.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
(cherry picked from commit 2a270a7)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
@HyukjinKwon (Member, Author) commented:

Thanks!

@HyukjinKwon HyukjinKwon deleted the SPARK-30810 branch March 3, 2020 01:16
sjincho pushed a commit to sjincho/spark that referenced this pull request Apr 15, 2020
…column from 'value' in csv(dataset) API

(commit message duplicates the pull request description above)

Closes apache#27561 from HyukjinKwon/SPARK-30810.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
4 participants