
Better handling of string lexing #140

Open

wants to merge 3 commits into master
Conversation

chewxy
Copy link

@chewxy commented Dec 23, 2020

The Problem

Given a schema like this:

type Post {
    id: ID!
    raw: String!
}

Here's a way to break a mutation:

mutation MyMutation {
  addPost(input: {raw: "\xaa Raw content"}) {
    numUids
  }
}

giving rise to the following error:

input:3: Unexpected <Invalid>

The issue is that \xaa is not parsed correctly.

The Solution

The solution is quite simple: add a case to the readString() method of the lexer to handle x as an escape character.

Side Note: How Parsing of Strings Should Work

Please note that the handling of bad inputs for \xHH (where H is a meta-variable standing in for a hexadecimal number) is not quite the same as elsewhere in the package. I've got a good reason for this, and I am willing to make changes to the other escape sequences as well.

With this PR, the bad inputs will lead to the literals being returned - so that "\xZZ" will return "\xZZ", while good inputs such as "\xaa" will be parsed correctly, returning "ª".
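The decode-or-fall-back behavior described above can be sketched roughly as follows. This is a minimal, standalone sketch, not the actual readString() code in the PR; decodeHexEscape is a hypothetical helper name introduced only for illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

// decodeHexEscape sketches the proposed behavior for the two characters
// that follow a `\x` in a string literal: two valid hex digits decode to
// the corresponding character, while bad input falls back to the literal
// `\xHH` text instead of producing a lexer error.
func decodeHexEscape(hh string) string {
	if len(hh) == 2 {
		if n, err := strconv.ParseUint(hh, 16, 8); err == nil {
			return string(rune(n))
		}
	}
	return `\x` + hh // bad input: keep the literal bytes
}

func main() {
	fmt.Println(decodeHexEscape("aa")) // valid hex: prints ª
	fmt.Println(decodeHexEscape("ZZ")) // invalid hex: prints \xZZ
}
```

The key design choice is that the invalid branch returns the original bytes rather than an error, which is what makes "\xZZ" round-trip as "\xZZ".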

I believe this is more user friendly.

In the example I listed, the scenario is one where the server is receiving a mutation. The input string could be end-user input, and end users often type the wrong things.

For example, consider a dyslexic person trying to write the sentence "they will give it to me/u". Such a person might well type something along the lines of "they will give it to me\u". In this case, the extra parsing for UTF-8 characters in the string will cause this input to fail. What the user meant to type, in string-literal form, is "they will give it to me\\u".

An argument could be made that the onus is on the user of this library to escape the string "they will give it to me\u" into "they will give it to me\\u". My counter-argument is that the role of a lexer is simply to find tokens: a string token containing the literal bytes of "they will give it to me\u" would qualify as a string token. That gqlparser goes above and beyond in order to decode the UTF-8 escape sequences in the contents of a string is most commendable.

But it should not return an error. In the example I have given so far, it would be very unfriendly to the end user, as well as the user of the library.

A further argument can be made: having the user of this library scan the string and escape any invalid sequences is wasted computation. In total, the program then has to go through the string twice: once to escape bad sequences (done by the user of this library), and a second time to parse the correct escape sequences into UTF-8 characters (done by this library). If we could save that computation by doing it all in one pass, we would make the world a much nicer place.
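To make the double pass concrete, here is a rough sketch of the extra pre-escaping pass a caller would need under the current error-returning behavior. escapeBadSequences is a hypothetical name introduced for illustration; the escapes it accepts are assumed to be the GraphQL spec's \" \\ \/ \b \f \n \r \t and \uXXXX:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeBadSequences walks the input once and doubles the backslash of any
// escape sequence the lexer would reject, so that e.g.
// `they will give it to me\u` becomes `they will give it to me\\u`.
// The lexer then makes a second pass to decode the (now all valid) escapes.
func escapeBadSequences(s string) string {
	isHex := func(c byte) bool {
		return (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')
	}
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		if s[i] != '\\' {
			b.WriteByte(s[i])
			continue
		}
		valid := false
		if i+1 < len(s) {
			switch s[i+1] {
			case '"', '\\', '/', 'b', 'f', 'n', 'r', 't':
				valid = true
			case 'u': // \uXXXX needs exactly four hex digits
				valid = i+5 < len(s) &&
					isHex(s[i+2]) && isHex(s[i+3]) && isHex(s[i+4]) && isHex(s[i+5])
			}
		}
		if valid {
			b.WriteByte('\\')
			b.WriteByte(s[i+1])
			i++ // consume the escape character too
		} else {
			b.WriteString(`\\`) // invalid escape: escape the backslash itself
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapeBadSequences(`they will give it to me\u`))
}
```

With the PR, this whole pass disappears: the lexer itself keeps the literal bytes for invalid sequences, so the string is traversed only once.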

As I mentioned, I am willing to put the changes in for handling the rest of the bad escape sequences. I am also OK if you want to just keep returning errors for string parsing, and will modify this PR. Your call @vektah .

End Notes

This was originally filed as an issue in Dgraph's community forum: https://discuss.dgraph.io/t/graphql-string-semantics-unclear-and-inconsistent-in-the-presence-of-escape-sequences/12019

@coveralls

coveralls commented Dec 23, 2020

Coverage Status

Coverage decreased (-0.03%) to 92.248% when pulling 9de70ae on chewxy:master into 7e475a7 on vektah:master.

@abhimanyusinghgaur
Contributor

Hi @lwc @vvakame,

Tagging you guys for PR review. Need your blessings :)

Thanks


@JatinDev543 left a comment


Looks good! After this PR we should also handle cases for the other bad escape sequences.
Hi @lwc @vvakame, a gentle reminder to review this PR.
Thanks.

@chewxy
Author

chewxy commented Jan 12, 2021

Please also review the notes in the PR message above and give me an idea what the plans are wrt handling bad inputs.

@JatinDev543

JatinDev543 commented Jan 12, 2021

> Please also review the notes in the PR message above and give me an idea what the plans are wrt handling bad inputs.

Some clarifications needed.

1. Why can't we treat everything as a literal? Instead of processing the characters after "\x", we could just accept everything and leave it to the user to decide how to process it, because from our end we only need to accept and store the data. The same goes for the other bad escape sequences such as "\v", "\0", etc.: just accept and store them instead of throwing an error.

2. I also have a doubt about your implementation.

You convert the two characters after "\x" to hexadecimal if they are valid, and otherwise treat them as a literal. What if someone wants valid input treated as a literal instead of converted to hex? For example, "\xaa" should stay "\xaa" rather than become "ª". Is that a valid requirement?

@StevenACoffman
Collaborator

@chewxy Sorry for the long delay. I'm looking this over, but I'm happy to merge it if you address the concerns of @JatinDev543.

speedoops added a commit to speedoops/gqlparser that referenced this pull request Jul 9, 2022
Better handling of string lexing vektah#140
5 participants