[SPARK-30810][SQL] Parses and convert a CSV Dataset having different column from 'value' in csv(dataset) API #27561
Closed
HyukjinKwon wants to merge 2 commits into apache:master from HyukjinKwon:SPARK-30810
Conversation
MaxGekk approved these changes on Feb 13, 2020
Test build #118356 has finished for PR 27561 at commit
HyukjinKwon commented on Feb 14, 2020
```scala
// with the one below, `filterCommentAndEmpty` but execution path is different. One of them
// might have to be removed in the near future if possible.
import lines.sqlContext.implicits._
val nonEmptyLines = lines.filter(length(trim($"value")) > 0)
```
Member
Author
@MaxGekk and @cloud-fan, I came up with a better idea to avoid relying on the string column name in `col`. Can you take a look again? I think this way is safer.
Test build #118386 has finished for PR 27561 at commit
Contributor
LGTM, merging to master/3.0!
cloud-fan pushed a commit that referenced this pull request on Feb 14, 2020
…column from 'value' in csv(dataset) API
### What changes were proposed in this pull request?
This PR fixes the `DataFrameReader.csv(dataset: Dataset[String])` API so it can take a `Dataset[String]` originating from a column named differently from `value`. This is a long-standing bug that has existed since the API was first introduced.
`CSVUtils.filterCommentAndEmpty` assumed the `Dataset[String]` originated from a `value` column. This PR changes it to use the first column name in the schema instead.
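A minimal sketch of the idea (simplified, not the exact Spark internals; the comment-handling here is illustrative):

```scala
import org.apache.spark.sql.{Dataset, functions => F}

// Sketch: filter blank and comment lines using whatever column backs the
// Dataset, instead of hard-coding col("value").
def filterCommentAndEmpty(lines: Dataset[String], comment: Char): Dataset[String] = {
  // Take the first (and only) column name from the schema, e.g. "text".
  val col = F.col(lines.columns.head)
  lines.filter(F.length(F.trim(col)) > 0 && !col.startsWith(comment.toString))
}
```

Because the column is looked up by position in the schema rather than by the literal name `value`, the filter resolves against any single-column `Dataset[String]`.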
### Why are the changes needed?
For `DataFrameReader.csv(dataset: Dataset[String])` to support any `Dataset[String]` as the signature indicates.
### Does this PR introduce any user-facing change?
Yes,
```scala
val ds = spark.range(2).selectExpr("concat('a,b,', id) AS text").as[String]
spark.read.option("header", true).option("inferSchema", true).csv(ds).show()
```
Before:
```
org.apache.spark.sql.AnalysisException: cannot resolve '`value`' given input columns: [text];;
'Filter (length(trim('value, None)) > 0)
+- Project [concat(a,b,, cast(id#0L as string)) AS text#2]
+- Range (0, 2, step=1, splits=Some(2))
```
After:
```
+---+---+---+
| a| b| 0|
+---+---+---+
| a| b| 1|
+---+---+---+
```
### How was this patch tested?
A unit test was added.
Closes #27561 from HyukjinKwon/SPARK-30810.
Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
(cherry picked from commit 2a270a7)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
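The unit test mentioned above might look roughly like this (a sketch only, assuming a Spark test suite with an active `SparkSession`; the test name and helpers are illustrative):

```scala
test("SPARK-30810: parse Dataset[String] originated from a non-'value' column") {
  import spark.implicits._
  // The Dataset's column is named "text", not the default "value".
  val ds = spark.range(2).selectExpr("concat('a,b,', id) AS text").as[String]
  val df = spark.read.option("header", true).option("inferSchema", true).csv(ds)
  // Header row "a,b,0" supplies the column names; one data row remains.
  assert(df.columns.toSeq === Seq("a", "b", "0"))
  assert(df.count() === 1)
}
```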
Member
Author
Thanks!
sjincho pushed a commit to sjincho/spark that referenced this pull request on Apr 15, 2020